00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 638
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3298
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.118 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.119 The recommended git tool is: git
00:00:00.119 using credential 00000000-0000-0000-0000-000000000002
00:00:00.121 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.144 Fetching changes from the remote Git repository
00:00:00.145 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.163 Using shallow fetch with depth 1
00:00:00.163 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.163 > git --version # timeout=10
00:00:00.183 > git --version # 'git version 2.39.2'
00:00:00.183 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.193 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.193 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.852 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.863 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.876 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD)
00:00:06.876 > git config core.sparsecheckout # timeout=10
00:00:06.885 > git read-tree -mu HEAD # timeout=10
00:00:06.900 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5
00:00:06.915 Commit message: "packer: Add bios builder"
00:00:06.915 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10
00:00:07.012 [Pipeline] Start of Pipeline
00:00:07.025 [Pipeline] library
00:00:07.026 Loading library shm_lib@master
00:00:07.027 Library shm_lib@master is cached. Copying from home.
00:00:07.044 [Pipeline] node
00:00:07.055 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:07.056 [Pipeline] {
00:00:07.069 [Pipeline] catchError
00:00:07.071 [Pipeline] {
00:00:07.084 [Pipeline] wrap
00:00:07.095 [Pipeline] {
00:00:07.104 [Pipeline] stage
00:00:07.107 [Pipeline] { (Prologue)
00:00:07.288 [Pipeline] sh
00:00:07.567 + logger -p user.info -t JENKINS-CI
00:00:07.585 [Pipeline] echo
00:00:07.587 Node: WFP21
00:00:07.594 [Pipeline] sh
00:00:07.893 [Pipeline] setCustomBuildProperty
00:00:07.907 [Pipeline] echo
00:00:07.908 Cleanup processes
00:00:07.914 [Pipeline] sh
00:00:08.196 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.196 781505 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.209 [Pipeline] sh
00:00:08.491 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.491 ++ grep -v 'sudo pgrep'
00:00:08.491 ++ awk '{print $1}'
00:00:08.491 + sudo kill -9
00:00:08.491 + true
00:00:08.506 [Pipeline] cleanWs
00:00:08.515 [WS-CLEANUP] Deleting project workspace...
00:00:08.515 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.522 [WS-CLEANUP] done
00:00:08.526 [Pipeline] setCustomBuildProperty
00:00:08.540 [Pipeline] sh
00:00:08.824 + sudo git config --global --replace-all safe.directory '*'
00:00:08.914 [Pipeline] httpRequest
00:00:08.969 [Pipeline] echo
00:00:08.971 Sorcerer 10.211.164.101 is alive
00:00:08.981 [Pipeline] httpRequest
00:00:08.986 HttpMethod: GET
00:00:08.987 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:08.987 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:09.007 Response Code: HTTP/1.1 200 OK
00:00:09.007 Success: Status code 200 is in the accepted range: 200,404
00:00:09.008 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:29.473 [Pipeline] sh
00:00:29.759 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:29.777 [Pipeline] httpRequest
00:00:29.805 [Pipeline] echo
00:00:29.808 Sorcerer 10.211.164.101 is alive
00:00:29.818 [Pipeline] httpRequest
00:00:29.824 HttpMethod: GET
00:00:29.824 URL: http://10.211.164.101/packages/spdk_cac68eec01afd99e79d97b2a93835888569d930b.tar.gz
00:00:29.825 Sending request to url: http://10.211.164.101/packages/spdk_cac68eec01afd99e79d97b2a93835888569d930b.tar.gz
00:00:29.831 Response Code: HTTP/1.1 200 OK
00:00:29.832 Success: Status code 200 is in the accepted range: 200,404
00:00:29.832 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_cac68eec01afd99e79d97b2a93835888569d930b.tar.gz
00:01:31.835 [Pipeline] sh
00:01:32.121 + tar --no-same-owner -xf spdk_cac68eec01afd99e79d97b2a93835888569d930b.tar.gz
00:01:34.671 [Pipeline] sh
00:01:34.956 + git -C spdk log --oneline -n5
00:01:34.956 cac68eec0 autotest: reduce RAID tests runs
00:01:34.956 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:01:34.956 fc2398dfa raid: clear base bdev configure_cb after executing
00:01:34.956 5558f3f50 raid: complete bdev_raid_create after sb is written
00:01:34.956 d005e023b raid: fix empty slot not updated in sb after resize
00:01:34.971 [Pipeline] withCredentials
00:01:34.980 > git --version # timeout=10
00:01:34.991 > git --version # 'git version 2.39.2'
00:01:35.008 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:35.010 [Pipeline] {
00:01:35.016 [Pipeline] retry
00:01:35.017 [Pipeline] {
00:01:35.030 [Pipeline] sh
00:01:35.309 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:35.322 [Pipeline] }
00:01:35.346 [Pipeline] // retry
00:01:35.352 [Pipeline] }
00:01:35.374 [Pipeline] // withCredentials
00:01:35.384 [Pipeline] httpRequest
00:01:35.404 [Pipeline] echo
00:01:35.407 Sorcerer 10.211.164.101 is alive
00:01:35.417 [Pipeline] httpRequest
00:01:35.422 HttpMethod: GET
00:01:35.422 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:35.423 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:35.424 Response Code: HTTP/1.1 200 OK
00:01:35.425 Success: Status code 200 is in the accepted range: 200,404
00:01:35.425 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:39.948 [Pipeline] sh
00:01:40.231 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:41.623 [Pipeline] sh
00:01:41.908 + git -C dpdk log --oneline -n5
00:01:41.908 eeb0605f11 version: 23.11.0
00:01:41.908 238778122a doc: update release notes for 23.11
00:01:41.908 46aa6b3cfc doc: fix description of RSS features
00:01:41.908 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:41.908 7e421ae345 devtools: support skipping forbid rule check
00:01:41.919 [Pipeline] }
00:01:41.938 [Pipeline] // stage
00:01:41.947 [Pipeline] stage
00:01:41.949 [Pipeline] { (Prepare)
00:01:41.971 [Pipeline] writeFile
00:01:41.989 [Pipeline] sh
00:01:42.272 + logger -p user.info -t JENKINS-CI
00:01:42.310 [Pipeline] sh
00:01:42.593 + logger -p user.info -t JENKINS-CI
00:01:42.605 [Pipeline] sh
00:01:42.888 + cat autorun-spdk.conf
00:01:42.888 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:42.888 SPDK_TEST_NVMF=1
00:01:42.888 SPDK_TEST_NVME_CLI=1
00:01:42.888 SPDK_TEST_NVMF_NICS=mlx5
00:01:42.888 SPDK_RUN_UBSAN=1
00:01:42.888 NET_TYPE=phy
00:01:42.888 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:42.888 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:42.895 RUN_NIGHTLY=1
00:01:42.900 [Pipeline] readFile
00:01:42.926 [Pipeline] withEnv
00:01:42.928 [Pipeline] {
00:01:42.942 [Pipeline] sh
00:01:43.226 + set -ex
00:01:43.226 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:01:43.226 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:43.226 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.226 ++ SPDK_TEST_NVMF=1
00:01:43.226 ++ SPDK_TEST_NVME_CLI=1
00:01:43.226 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:43.226 ++ SPDK_RUN_UBSAN=1
00:01:43.226 ++ NET_TYPE=phy
00:01:43.226 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:43.226 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:43.226 ++ RUN_NIGHTLY=1
00:01:43.226 + case $SPDK_TEST_NVMF_NICS in
00:01:43.226 + DRIVERS=mlx5_ib
00:01:43.226 + [[ -n mlx5_ib ]]
00:01:43.226 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:49.794 rmmod: ERROR: Module irdma is not currently loaded
00:01:49.794 rmmod: ERROR: Module i40iw is not currently loaded
00:01:49.794 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:49.794 + true
00:01:49.794 + for D in $DRIVERS
00:01:49.794 + sudo modprobe mlx5_ib
00:01:49.794 + exit 0
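
The driver-reload step above follows a common CI pattern: unload every RDMA module that might hold the NICs, tolerate rmmod failures for modules that are not loaded, then modprobe only the driver selected by SPDK_TEST_NVMF_NICS. A minimal standalone sketch of that pattern, with the module names and DRIVERS value taken from the trace above (everything else is illustrative, not the CI's actual script):

    #!/usr/bin/env bash
    # Driver under test, as selected by "case $SPDK_TEST_NVMF_NICS in" above.
    DRIVERS=mlx5_ib
    # Unload all candidate RDMA drivers. rmmod exits non-zero for modules that
    # are not loaded, so "|| true" keeps going (the "+ true" line in the log).
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    # Load only the driver(s) this run needs.
    for D in $DRIVERS; do
        sudo modprobe "$D"
    done
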
00:01:49.804 [Pipeline] }
00:01:49.822 [Pipeline] // withEnv
00:01:49.828 [Pipeline] }
00:01:49.846 [Pipeline] // stage
00:01:49.857 [Pipeline] catchError
00:01:49.859 [Pipeline] {
00:01:49.874 [Pipeline] timeout
00:01:49.875 Timeout set to expire in 1 hr 0 min
00:01:49.877 [Pipeline] {
00:01:49.892 [Pipeline] stage
00:01:49.895 [Pipeline] { (Tests)
00:01:49.911 [Pipeline] sh
00:01:50.195 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:01:50.195 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:01:50.195 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:01:50.195 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:01:50.195 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:50.195 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:01:50.195 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:01:50.195 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:50.195 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:01:50.195 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:50.195 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:01:50.195 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:01:50.195 + source /etc/os-release
00:01:50.195 ++ NAME='Fedora Linux'
00:01:50.195 ++ VERSION='38 (Cloud Edition)'
00:01:50.195 ++ ID=fedora
00:01:50.195 ++ VERSION_ID=38
00:01:50.195 ++ VERSION_CODENAME=
00:01:50.195 ++ PLATFORM_ID=platform:f38
00:01:50.195 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:50.195 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:50.195 ++ LOGO=fedora-logo-icon
00:01:50.195 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:50.195 ++ HOME_URL=https://fedoraproject.org/
00:01:50.195 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:50.195 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:50.195 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:50.195 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:50.195 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:50.195 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:50.195 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:50.195 ++ SUPPORT_END=2024-05-14
00:01:50.195 ++ VARIANT='Cloud Edition'
00:01:50.195 ++ VARIANT_ID=cloud
00:01:50.195 + uname -a
00:01:50.195 Linux spdk-wfp-21 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:50.195 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:01:53.485 Hugepages
00:01:53.485 node hugesize free / total
00:01:53.485 node0 1048576kB 0 / 0
00:01:53.485 node0 2048kB 0 / 0
00:01:53.485 node1 1048576kB 0 / 0
00:01:53.485 node1 2048kB 0 / 0
00:01:53.485
00:01:53.485 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:53.485 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:53.485 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:53.485 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:53.485 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:53.485 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:53.485 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:53.485 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:53.485 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:53.485 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:53.485 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:53.485 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:53.485 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:53.485 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:53.485 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:53.485 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:53.485 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:53.485 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:53.485 + rm -f /tmp/spdk-ld-path
00:01:53.485 + source autorun-spdk.conf
00:01:53.485 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:53.485 ++ SPDK_TEST_NVMF=1
00:01:53.485 ++ SPDK_TEST_NVME_CLI=1
00:01:53.485 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:53.485 ++ SPDK_RUN_UBSAN=1
00:01:53.485 ++ NET_TYPE=phy
00:01:53.485 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:53.485 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:53.485 ++ RUN_NIGHTLY=1
00:01:53.485 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:53.485 + [[ -n '' ]]
00:01:53.485 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:53.485 + for M in /var/spdk/build-*-manifest.txt
00:01:53.485 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:53.485 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:53.485 + for M in /var/spdk/build-*-manifest.txt
00:01:53.485 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:53.485 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:53.485 ++ uname
00:01:53.485 + [[ Linux == \L\i\n\u\x ]]
00:01:53.485 + sudo dmesg -T
00:01:53.485 + sudo dmesg --clear
00:01:53.485 + dmesg_pid=783165
00:01:53.485 + [[ Fedora Linux == FreeBSD ]]
00:01:53.485 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:53.485 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:53.485 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:53.485 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:53.485 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:53.485 + [[ -x /usr/src/fio-static/fio ]]
00:01:53.485 + export FIO_BIN=/usr/src/fio-static/fio
00:01:53.485 + FIO_BIN=/usr/src/fio-static/fio
00:01:53.485 + sudo dmesg -Tw
00:01:53.485 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:53.485 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:53.485 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:53.485 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:53.485 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:53.485 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:53.485 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:53.485 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:53.485 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:53.485 Test configuration:
00:01:53.485 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:53.485 SPDK_TEST_NVMF=1
00:01:53.485 SPDK_TEST_NVME_CLI=1
00:01:53.485 SPDK_TEST_NVMF_NICS=mlx5
00:01:53.485 SPDK_RUN_UBSAN=1
00:01:53.485 NET_TYPE=phy
00:01:53.485 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:53.485 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:53.485 RUN_NIGHTLY=1
20:22:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
20:22:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
20:22:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
20:22:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
20:22:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
20:22:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
20:22:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
20:22:41 -- paths/export.sh@5 -- $ export PATH
20:22:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
20:22:41 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
20:22:41 -- common/autobuild_common.sh@447 -- $ date +%s
20:22:41 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1722018161.XXXXXX
20:22:41 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1722018161.e2TsXW
20:22:41 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:53.485 20:22:41 -- common/autobuild_common.sh@453 -- $ '[' -n v23.11 ']'
00:01:53.485 20:22:41 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:53.485 20:22:41 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk'
00:01:53.486 20:22:41 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:53.486 20:22:41 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:53.486 20:22:41 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:53.486 20:22:41 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:53.486 20:22:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.486 20:22:41 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build'
00:01:53.486 20:22:41 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:53.486 20:22:41 -- pm/common@17 -- $ local monitor
00:01:53.486 20:22:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:53.486 20:22:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:53.486 20:22:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:53.486 20:22:41 -- pm/common@21 -- $ date +%s
00:01:53.486 20:22:41 -- pm/common@21 -- $ date +%s
00:01:53.486 20:22:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:53.486 20:22:41 -- pm/common@25 -- $ sleep 1
00:01:53.486 20:22:41 -- pm/common@21 -- $ date +%s
00:01:53.486 20:22:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722018161
00:01:53.486 20:22:41 -- pm/common@21 -- $ date +%s
00:01:53.486 20:22:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722018161
00:01:53.486 20:22:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722018161
00:01:53.486 20:22:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1722018161
00:01:53.486 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722018161_collect-vmstat.pm.log
00:01:53.486 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722018161_collect-cpu-load.pm.log
00:01:53.486 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722018161_collect-cpu-temp.pm.log
00:01:53.486 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1722018161_collect-bmc-pm.bmc.pm.log
00:01:54.425 20:22:42 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:54.425 20:22:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:54.425 20:22:42 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:54.425 20:22:42 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:54.425 20:22:42 -- spdk/autobuild.sh@16 -- $ date -u
00:01:54.425 Fri Jul 26 06:22:42 PM UTC 2024
00:01:54.425 20:22:42 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:54.425 v24.09-pre-322-gcac68eec0
00:01:54.425 20:22:42 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:54.425 20:22:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:54.425 20:22:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:54.425 20:22:42 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:54.425 20:22:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:54.425 20:22:42 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.425 ************************************
00:01:54.425 START TEST ubsan
00:01:54.425 ************************************
00:01:54.425 20:22:42 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:54.425 using ubsan
00:01:54.425
00:01:54.425 real 0m0.000s
00:01:54.425 user 0m0.000s
00:01:54.425 sys 0m0.000s
00:01:54.425 20:22:42 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:54.425 20:22:42 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:54.425 ************************************
00:01:54.425 END TEST ubsan
00:01:54.425 ************************************
00:01:54.425 20:22:42 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:01:54.425 20:22:42 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:54.425 20:22:42 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:54.425 20:22:42 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:54.425 20:22:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:54.425 20:22:42 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.685 ************************************
00:01:54.685 START TEST build_native_dpdk
00:01:54.685 ************************************
00:01:54.685 20:22:42 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:01:54.685 20:22:42 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:54.685 20:22:42 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:54.685 20:22:42 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:54.685 20:22:42 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:54.685 20:22:42 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:54.685 20:22:42 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:54.685 20:22:42 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:54.685 20:22:42 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:54.685 20:22:42 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:54.685 20:22:42 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:54.685 20:22:42 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:54.685 20:22:42 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:54.685 20:22:43 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:54.685 20:22:43 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:54.685 20:22:43 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:54.685 20:22:43 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:54.685 20:22:43 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk
00:01:54.685 20:22:43 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]]
00:01:54.685 20:22:43 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:54.685 20:22:43 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5
00:01:54.686 eeb0605f11 version: 23.11.0
00:01:54.686 238778122a doc: update release notes for 23.11
00:01:54.686 46aa6b3cfc doc: fix description of RSS features
00:01:54.686 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:54.686 7e421ae345 devtools: support skipping forbid rule check
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:54.686 patching file config/rte_config.h
00:01:54.686 Hunk #1 succeeded at 60 (offset 1 line).
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] ))
00:01:54.686 20:22:43 build_native_dpdk -- scripts/common.sh@365 -- $ return 0
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1
00:01:54.686 patching file lib/pcapng/rte_pcapng.c
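
The lt/cmp_versions trace above compares dotted version strings field by field to gate which compatibility patches apply: 23.11.0 is not below 21.11.0, but it is below 24.07.0, so the pcapng patch is taken. A simplified sketch of that comparison (the helper name version_lt is hypothetical; this is not the exact scripts/common.sh implementation):

    # Return 0 (true) when dotted version $1 is strictly less than $2.
    version_lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-', ':' as in the trace
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater field: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # smaller field: less-than
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 23.11.0 24.07.0 && echo "apply the pcapng patch"
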
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:01:54.686 20:22:43 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:59.963 The Meson build system
00:01:59.963 Version: 1.3.1
00:01:59.963 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk
00:01:59.963 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp
00:01:59.963 Build type: native build
00:01:59.963 Program cat found: YES (/usr/bin/cat)
00:01:59.963 Project name: DPDK
00:01:59.963 Project version: 23.11.0
00:01:59.963 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:59.963 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:59.963 Host machine cpu family: x86_64
00:01:59.963 Host machine cpu: x86_64
00:01:59.963 Message: ## Building in Developer Mode ##
00:01:59.963 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:59.963 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:59.963 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:59.963 Program python3 found: YES (/usr/bin/python3)
00:01:59.963 Program cat found: YES (/usr/bin/cat)
00:01:59.963 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:59.963 Compiler for C supports arguments -march=native: YES
00:01:59.963 Checking for size of "void *" : 8
00:01:59.963 Checking for size of "void *" : 8 (cached)
00:01:59.963 Library m found: YES
00:01:59.963 Library numa found: YES
00:01:59.963 Has header "numaif.h" : YES
00:01:59.963 Library fdt found: NO
00:01:59.963 Library execinfo found: NO
00:01:59.963 Has header "execinfo.h" : YES
00:01:59.963 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:59.963 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:59.963 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:59.963 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:59.963 Run-time dependency openssl found: YES 3.0.9
00:01:59.963 Run-time dependency libpcap found: YES 1.10.4
00:01:59.963 Has header "pcap.h" with dependency libpcap: YES
00:01:59.963 Compiler for C supports arguments -Wcast-qual: YES
00:01:59.963 Compiler for C supports arguments -Wdeprecated: YES
00:01:59.963 Compiler for C supports arguments -Wformat: YES
00:01:59.963 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:59.963 Compiler for C supports arguments -Wformat-security: NO
00:01:59.963 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:59.963 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:59.963 Compiler for C supports arguments -Wnested-externs: YES
00:01:59.963 Compiler for C supports arguments -Wold-style-definition: YES
00:01:59.963 Compiler for C supports arguments -Wpointer-arith: YES
00:01:59.963 Compiler for C supports arguments -Wsign-compare: YES
00:01:59.963 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:59.963 Compiler for C supports arguments -Wundef: YES
00:01:59.963 Compiler for C supports arguments -Wwrite-strings: YES
00:01:59.963 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:59.963 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:59.963 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:59.963 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:59.963 Program objdump found: YES (/usr/bin/objdump)
00:01:59.963 Compiler for C supports arguments -mavx512f: YES
00:01:59.963 Checking if "AVX512 checking" compiles: YES
00:01:59.963 Fetching value of define "__SSE4_2__" : 1
00:01:59.963 Fetching value of define "__AES__" : 1
00:01:59.963 Fetching value of define "__AVX__" : 1
00:01:59.963 Fetching value of define "__AVX2__" : 1
00:01:59.963 Fetching value of define "__AVX512BW__" : 1
00:01:59.963 Fetching value of define "__AVX512CD__" : 1
00:01:59.963 Fetching value of define "__AVX512DQ__" : 1
00:01:59.963 Fetching value of define "__AVX512F__" : 1
00:01:59.963 Fetching value of define "__AVX512VL__" : 1
00:01:59.963 Fetching value of define "__PCLMUL__" : 1
00:01:59.963 Fetching value of define "__RDRND__" : 1
00:01:59.963 Fetching value of define "__RDSEED__" : 1
00:01:59.963 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:59.963 Fetching value of define "__znver1__" : (undefined)
00:01:59.963 Fetching value of define "__znver2__" : (undefined)
00:01:59.963 Fetching value of define "__znver3__" : (undefined)
00:01:59.963 Fetching value of define "__znver4__" : (undefined)
00:01:59.963 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:59.963 Message: lib/log: Defining dependency "log"
00:01:59.963 Message: lib/kvargs: Defining dependency "kvargs"
00:01:59.963 Message: lib/telemetry: Defining dependency "telemetry"
00:01:59.963 Checking for function "getentropy" : NO
00:01:59.963 Message: lib/eal: Defining dependency "eal"
00:01:59.963 Message: lib/ring: Defining dependency "ring"
00:01:59.963 Message: lib/rcu: Defining dependency "rcu"
00:01:59.963 Message: lib/mempool: Defining dependency "mempool"
00:01:59.963 Message: lib/mbuf: Defining dependency "mbuf"
00:01:59.963 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:59.963 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:59.963 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:59.963 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:59.963 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:59.964 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:59.964 Compiler for C supports arguments -mpclmul: YES
00:01:59.964 Compiler for C supports arguments -maes: YES
00:01:59.964 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:59.964 Compiler for C supports arguments -mavx512bw: YES
00:01:59.964 Compiler for C supports arguments -mavx512dq: YES
00:01:59.964 Compiler for C supports arguments -mavx512vl: YES
00:01:59.964 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:59.964 Compiler for C supports arguments -mavx2: YES
00:01:59.964 Compiler for C supports arguments -mavx: YES
00:01:59.964 Message: lib/net: Defining dependency "net"
00:01:59.964 Message: lib/meter: Defining dependency "meter"
00:01:59.964 Message: lib/ethdev: Defining dependency "ethdev"
00:01:59.964 Message: lib/pci: Defining dependency "pci"
00:01:59.964 Message: lib/cmdline: Defining dependency "cmdline"
00:01:59.964 Message: lib/metrics: Defining dependency "metrics"
00:01:59.964 Message: lib/hash: Defining dependency "hash"
00:01:59.964 Message: lib/timer: Defining dependency "timer"
00:01:59.964 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:59.964 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:59.964 Fetching value of define "__AVX512CD__" : 1 (cached)
00:01:59.964 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:59.964 Message: lib/acl: Defining dependency "acl"
00:01:59.964 Message: lib/bbdev: Defining dependency "bbdev"
00:01:59.964 Message: lib/bitratestats: Defining dependency "bitratestats"
00:01:59.964 Run-time dependency libelf found: YES 0.190
00:01:59.964 Message: lib/bpf: Defining dependency "bpf"
00:01:59.964 Message: lib/cfgfile: Defining dependency "cfgfile"
00:01:59.964 Message: lib/compressdev: Defining dependency "compressdev"
00:01:59.964 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:59.964 Message: lib/distributor: Defining dependency "distributor"
00:01:59.964 Message: lib/dmadev: Defining dependency "dmadev"
00:01:59.964 Message: lib/efd: Defining dependency "efd"
00:01:59.964 Message: lib/eventdev: Defining dependency "eventdev"
00:01:59.964 Message: lib/dispatcher: Defining dependency "dispatcher"
00:01:59.964 Message: lib/gpudev: Defining dependency "gpudev"
00:01:59.964 Message: lib/gro: Defining dependency "gro"
00:01:59.964 Message: lib/gso: Defining dependency "gso"
00:01:59.964 Message: lib/ip_frag: Defining dependency "ip_frag"
00:01:59.964 Message: lib/jobstats: Defining dependency "jobstats"
00:01:59.964 Message: lib/latencystats: Defining dependency "latencystats"
00:01:59.964 Message: lib/lpm: Defining dependency "lpm"
00:01:59.964 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:59.964 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:59.964 Fetching value of define "__AVX512IFMA__" : (undefined)
00:01:59.964 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:01:59.964 Message: lib/member: Defining dependency "member"
00:01:59.964 Message: lib/pcapng: Defining dependency "pcapng"
00:01:59.964 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:59.964 Message: lib/power: Defining dependency "power"
00:01:59.964 Message: lib/rawdev: Defining dependency "rawdev"
00:01:59.964 Message: lib/regexdev: Defining dependency "regexdev"
00:01:59.964 Message: lib/mldev: Defining dependency "mldev"
00:01:59.964 Message: lib/rib: Defining dependency "rib"
00:01:59.964 Message: lib/reorder: Defining dependency "reorder"
00:01:59.964 Message: lib/sched: Defining dependency "sched"
00:01:59.964 Message: lib/security: Defining dependency "security"
00:01:59.964 Message: lib/stack: Defining dependency "stack"
00:01:59.964 Has header "linux/userfaultfd.h" : YES
00:01:59.964 Has header "linux/vduse.h" : YES
00:01:59.964 Message: lib/vhost: Defining dependency "vhost"
00:01:59.964 Message: lib/ipsec: Defining dependency "ipsec"
00:01:59.964 Message: lib/pdcp: Defining dependency "pdcp"
00:01:59.964 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:59.964 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:59.964 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:59.964 Message: lib/fib: Defining dependency "fib"
00:01:59.964 Message: lib/port: Defining dependency "port"
00:01:59.964 Message: lib/pdump: Defining dependency "pdump"
00:01:59.964 Message: lib/table: Defining dependency "table"
00:01:59.964 Message: lib/pipeline: Defining dependency "pipeline"
00:01:59.964 Message: lib/graph: Defining dependency "graph"
00:01:59.964 Message: lib/node: Defining dependency "node"
00:01:59.964 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:00.533 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:00.533 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:00.533 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:00.533 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:00.533 Compiler for C supports arguments -Wno-unused-value: YES
00:02:00.533 Compiler for C supports arguments -Wno-format: YES
00:02:00.533 Compiler for C supports arguments -Wno-format-security: YES
00:02:00.533 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:00.533 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:00.533 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:00.533 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:00.533 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:00.533 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:00.533 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:00.533 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:00.533 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:00.533 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:00.533 Has header "sys/epoll.h" : YES
00:02:00.533 Program doxygen found: YES (/usr/bin/doxygen)
00:02:00.533 Configuring doxy-api-html.conf using configuration
00:02:00.533 Configuring doxy-api-man.conf using configuration
00:02:00.533 Program mandb found: YES (/usr/bin/mandb)
00:02:00.533 Program sphinx-build found: NO
00:02:00.533 Configuring rte_build_config.h using configuration
00:02:00.533 Message:
00:02:00.534 =================
00:02:00.534 Applications Enabled
00:02:00.534 =================
00:02:00.534
00:02:00.534 apps:
00:02:00.534 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:02:00.534 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:00.534 test-pmd, test-regex, test-sad, test-security-perf,
00:02:00.534
00:02:00.534 Message:
00:02:00.534 =================
00:02:00.534 Libraries Enabled
00:02:00.534 =================
00:02:00.534
00:02:00.534 libs:
00:02:00.534 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:00.534 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:02:00.534 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:02:00.534 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:02:00.534 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:02:00.534 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:02:00.534 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:02:00.534
00:02:00.534
00:02:00.534 Message:
00:02:00.534 ===============
00:02:00.534 Drivers Enabled
00:02:00.534 ===============
00:02:00.534
00:02:00.534 common:
00:02:00.534
00:02:00.534 bus:
00:02:00.534 pci, vdev,
00:02:00.534 mempool:
00:02:00.534 ring,
00:02:00.534 dma:
00:02:00.534
00:02:00.534 net:
00:02:00.534 i40e,
00:02:00.534 raw:
00:02:00.534
00:02:00.534 crypto:
00:02:00.534
00:02:00.534 compress:
00:02:00.534
00:02:00.534 regex:
00:02:00.534
00:02:00.534 ml:
00:02:00.534
00:02:00.534 vdpa:
00:02:00.534
00:02:00.534 event:
00:02:00.534
00:02:00.534 baseband:
00:02:00.534
00:02:00.534 gpu:
00:02:00.534
00:02:00.534
00:02:00.534 Message:
00:02:00.534 =================
00:02:00.534 Content Skipped
00:02:00.534 =================
00:02:00.534
00:02:00.534 apps:
00:02:00.534
00:02:00.534 libs:
00:02:00.534
00:02:00.534 drivers:
00:02:00.534 common/cpt: not in enabled drivers build config
00:02:00.534 common/dpaax: not in enabled drivers build config
00:02:00.534 common/iavf: not in enabled drivers build config
00:02:00.534 common/idpf: not in enabled drivers build config
00:02:00.534 common/mvep: not in enabled drivers build config
00:02:00.534 common/octeontx: not in enabled drivers build config
00:02:00.534 bus/auxiliary: not in enabled drivers build config
00:02:00.534 bus/cdx: not in enabled drivers build config
00:02:00.534 bus/dpaa: not in enabled drivers build config
00:02:00.534 bus/fslmc: not in enabled drivers build config
00:02:00.534 bus/ifpga: not in enabled drivers build config
00:02:00.534 bus/platform: not in enabled drivers build config
00:02:00.534 bus/vmbus: not in enabled drivers build config
00:02:00.534 common/cnxk: not in enabled drivers build config
00:02:00.534 common/mlx5: not in enabled drivers build config
00:02:00.534 common/nfp: not in enabled drivers build config
00:02:00.534 common/qat: not in enabled drivers build config
00:02:00.534 common/sfc_efx: not in enabled drivers build config
00:02:00.534 mempool/bucket: not in enabled drivers build config
00:02:00.534 mempool/cnxk: not in enabled drivers build config
00:02:00.534 mempool/dpaa: not in enabled drivers build config
00:02:00.534 mempool/dpaa2: not in enabled drivers build config
00:02:00.534 mempool/octeontx: not in enabled drivers build config
00:02:00.534 mempool/stack: not in enabled drivers build config
00:02:00.534 dma/cnxk: not in enabled drivers build config
00:02:00.534 dma/dpaa: not in enabled drivers build config
00:02:00.534 dma/dpaa2: not in enabled drivers build config
00:02:00.534 dma/hisilicon: not in enabled drivers build config
00:02:00.534 dma/idxd: not in enabled drivers build config
00:02:00.534 dma/ioat: not in enabled drivers build config
00:02:00.534 dma/skeleton: not in enabled drivers build config
00:02:00.534 net/af_packet: not in enabled drivers build config
00:02:00.534 net/af_xdp: not in enabled drivers build config
00:02:00.534 net/ark: not in enabled drivers build config
00:02:00.534 net/atlantic: not in enabled drivers build config
00:02:00.534 net/avp: not in enabled drivers build config
00:02:00.534 net/axgbe: not in enabled drivers build config
00:02:00.534 net/bnx2x: not in enabled drivers build config
00:02:00.534 net/bnxt: not in enabled drivers build config
00:02:00.534 net/bonding: not in enabled drivers build config
00:02:00.534 net/cnxk: not in enabled drivers build config
00:02:00.534 net/cpfl: not in enabled drivers build config
00:02:00.534 net/cxgbe: not in enabled drivers build config
00:02:00.534 net/dpaa: not in enabled drivers build config
00:02:00.534 net/dpaa2: not in enabled drivers build config
00:02:00.534 net/e1000: not in enabled drivers build config
00:02:00.534 net/ena: not in enabled drivers build config
00:02:00.534 net/enetc: not in enabled drivers build config
00:02:00.534 net/enetfec: not in enabled drivers build config
00:02:00.534 net/enic: not in enabled drivers build config
00:02:00.534 net/failsafe: not in enabled drivers build config
00:02:00.534 net/fm10k: not in enabled drivers build config
00:02:00.534 net/gve: not in enabled drivers build config
00:02:00.534 net/hinic: not in enabled drivers build config
00:02:00.534 net/hns3: not in enabled drivers build config
00:02:00.534 net/iavf: not in enabled drivers build config
00:02:00.534 net/ice: not in enabled drivers build config
00:02:00.534 net/idpf: not in enabled drivers build config
00:02:00.534 net/igc: not in enabled drivers build config
00:02:00.534 net/ionic: not in enabled drivers build config
00:02:00.534 net/ipn3ke: not in enabled drivers build config
00:02:00.534 net/ixgbe: not in enabled drivers build config
00:02:00.534 net/mana: not in enabled drivers build config
00:02:00.534 net/memif: not in enabled drivers build config
00:02:00.534 net/mlx4: not in enabled drivers build config
00:02:00.534 net/mlx5: not in enabled drivers build config
00:02:00.534 net/mvneta: not in enabled drivers build config
00:02:00.534 net/mvpp2: not in enabled drivers build config
00:02:00.534 net/netvsc: not in enabled drivers build config
00:02:00.534 net/nfb: not in enabled drivers build config
00:02:00.534 net/nfp: not in enabled drivers build config
00:02:00.534 net/ngbe: not in enabled drivers build config
00:02:00.534 net/null: not in enabled drivers build config
00:02:00.534 net/octeontx: not in enabled drivers build config
00:02:00.534 net/octeon_ep: not in enabled drivers build config
00:02:00.534 net/pcap: not in enabled drivers build config
00:02:00.534 net/pfe: not in enabled drivers build config
00:02:00.534 net/qede: not in enabled drivers build config
00:02:00.534 net/ring: not in enabled drivers build config
00:02:00.534 net/sfc: not in enabled drivers build config
00:02:00.534 net/softnic: not in enabled drivers build config
00:02:00.534 net/tap: not in enabled drivers build config
00:02:00.534 net/thunderx: not in enabled drivers build config
00:02:00.534 net/txgbe: not in enabled drivers build config
00:02:00.534 net/vdev_netvsc: not in enabled drivers build config
00:02:00.534 net/vhost: not in enabled drivers build config
00:02:00.534 net/virtio: not in enabled drivers build config
00:02:00.534 net/vmxnet3: not in enabled drivers build config
00:02:00.534 raw/cnxk_bphy: not in enabled drivers build config
00:02:00.534 raw/cnxk_gpio: not in enabled drivers build config
00:02:00.534 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:00.534 raw/ifpga: not in enabled drivers build config
00:02:00.534 raw/ntb: not in enabled drivers build config
00:02:00.534 raw/skeleton: not in enabled drivers build config
00:02:00.534 crypto/armv8: not in enabled drivers build config
00:02:00.534 crypto/bcmfs: not in enabled drivers build config
00:02:00.534 crypto/caam_jr: not in enabled drivers build config
00:02:00.534 crypto/ccp: not in enabled drivers build config
00:02:00.534 crypto/cnxk: not in enabled drivers build config
00:02:00.534 crypto/dpaa_sec: not in enabled drivers build config
00:02:00.534 crypto/dpaa2_sec: not in enabled drivers build config
00:02:00.534 crypto/ipsec_mb: not in enabled drivers build config
00:02:00.534 crypto/mlx5: not in enabled drivers build config
00:02:00.534 crypto/mvsam: not in enabled drivers build config
00:02:00.534 crypto/nitrox: not in enabled drivers build config
00:02:00.534 crypto/null: not in enabled drivers build config
00:02:00.535 crypto/octeontx: not in enabled drivers build config
00:02:00.535 crypto/openssl: not in enabled drivers build config
00:02:00.535 crypto/scheduler: not in enabled drivers build config
00:02:00.535 crypto/uadk: not in enabled drivers build config
00:02:00.535 crypto/virtio: not in enabled drivers build config
00:02:00.535 compress/isal: not in enabled drivers build config
00:02:00.535 compress/mlx5: not in enabled drivers build config
00:02:00.535 compress/octeontx: not in enabled drivers build config
00:02:00.535 compress/zlib: not in enabled drivers build config
00:02:00.535 regex/mlx5: not in enabled drivers build config
00:02:00.535 regex/cn9k: not in enabled drivers build config
00:02:00.535 ml/cnxk: not in enabled drivers build config
00:02:00.535 vdpa/ifc: not in enabled drivers build config
00:02:00.535 vdpa/mlx5: not in enabled drivers build config
00:02:00.535 vdpa/nfp: not in enabled drivers build config
00:02:00.535 vdpa/sfc: not in enabled drivers build config
00:02:00.535 event/cnxk: not in enabled drivers build config
00:02:00.535 event/dlb2: not in enabled drivers build config
00:02:00.535 event/dpaa: not in enabled drivers build config
00:02:00.535 event/dpaa2: not in enabled drivers build config
00:02:00.535 event/dsw: not in enabled drivers build config
00:02:00.535 event/opdl: not in enabled drivers build config
00:02:00.535 event/skeleton: not in enabled drivers build config
00:02:00.535 event/sw: not in enabled drivers build config
00:02:00.535 event/octeontx: not in enabled drivers build config
00:02:00.535 baseband/acc: not in enabled drivers build config
00:02:00.535 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:00.535 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:00.535 baseband/la12xx: not in enabled drivers build config
00:02:00.535 baseband/null: not in enabled drivers build config
00:02:00.535 baseband/turbo_sw: not in enabled drivers build config
00:02:00.535 gpu/cuda: not in enabled drivers build config
00:02:00.535
00:02:00.535 Build targets in project: 217
00:02:00.535
00:02:00.535 DPDK 23.11.0
00:02:00.535
00:02:00.535 User defined options
00:02:00.535 libdir : lib
00:02:00.535 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:00.535 c_link_args : 00:02:00.535 enable_docs : false 00:02:00.535 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:00.535 enable_kmods : false 00:02:00.535 machine : native 00:02:00.535 tests : false 00:02:00.535 00:02:00.535 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.535 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:00.535 20:22:49 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 00:02:00.800 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:02:00.800 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:00.800 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.800 [3/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:00.800 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:00.800 [5/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:00.800 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.800 [7/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:01.063 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:01.063 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:01.063 [10/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:01.063 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:01.063 [12/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:01.063 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.063 [14/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:01.063 [15/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:01.063 [16/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:01.063 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.063 [18/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:01.063 [19/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:01.063 [20/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:01.063 [21/707] Linking static target lib/librte_kvargs.a 00:02:01.063 [22/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:01.063 [23/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:01.063 [24/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:01.063 [25/707] Linking static target lib/librte_pci.a 00:02:01.063 [26/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:01.063 [27/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:01.063 [28/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:01.063 [29/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:01.063 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:01.063 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:01.063 [32/707] Compiling C object lib/librte_log.a.p/log_log.c.o 
00:02:01.063 [33/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:01.063 [34/707] Linking static target lib/librte_log.a 00:02:01.063 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:01.327 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:01.327 [37/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:01.327 [38/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:01.327 [39/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.327 [40/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.589 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:01.589 [42/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:01.589 [43/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:01.589 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:01.589 [45/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:01.589 [46/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:01.589 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:01.589 [48/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:01.589 [49/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:01.589 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.589 [51/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:01.589 [52/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:01.589 [53/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:01.590 [54/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.590 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:01.590 [56/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:01.590 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:01.590 [58/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:01.590 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:01.590 [60/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:01.590 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:01.590 [62/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:01.590 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.590 [64/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:01.590 [65/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:01.590 [66/707] Linking static target lib/librte_meter.a 00:02:01.590 [67/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.590 [68/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:01.590 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:01.590 [70/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:01.590 [71/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:01.590 [72/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 
00:02:01.590 [73/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:01.590 [74/707] Linking static target lib/librte_ring.a 00:02:01.590 [75/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:01.590 [76/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:01.590 [77/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:01.590 [78/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:01.590 [79/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:01.590 [80/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:01.590 [81/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:01.590 [82/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:01.590 [83/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:01.590 [84/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:01.590 [85/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:01.590 [86/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:01.590 [87/707] Linking static target lib/librte_cmdline.a 00:02:01.590 [88/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:01.590 [89/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:01.590 [90/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:01.590 [91/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:01.590 [92/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.590 [93/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:01.590 [94/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:01.849 [95/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:01.849 [96/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:01.849 [97/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:01.849 [98/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:01.849 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:01.849 [100/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:01.849 [101/707] Linking static target lib/librte_metrics.a 00:02:01.849 [102/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:01.849 [103/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.849 [104/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:01.849 [105/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:01.849 [106/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:01.849 [107/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:01.849 [108/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:01.849 [109/707] Linking static target lib/librte_net.a 00:02:01.849 [110/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:01.849 [111/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:01.849 [112/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:01.849 [113/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:01.849 [114/707] Linking static target lib/librte_bitratestats.a 
00:02:01.849 [115/707] Linking static target lib/librte_cfgfile.a 00:02:01.849 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:01.849 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:01.849 [118/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:01.849 [119/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:01.849 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:01.849 [121/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:01.849 [122/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:01.849 [123/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:01.849 [124/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.849 [125/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.110 [126/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.110 [127/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:02.110 [128/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:02.110 [129/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:02.110 [130/707] Linking target lib/librte_log.so.24.0 00:02:02.110 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:02.110 [132/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:02.110 [133/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:02.110 [134/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.110 [135/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:02.110 [136/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:02.110 [137/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:02.110 [138/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.110 [139/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:02.110 [140/707] Linking static target lib/librte_timer.a 00:02:02.110 [141/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:02.110 [142/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:02.110 [143/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:02.110 [144/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:02.110 [145/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:02.110 [146/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.110 [147/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:02.110 [148/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:02.110 [149/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.110 [150/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:02.110 [151/707] Linking static target lib/librte_bbdev.a 00:02:02.110 [152/707] Linking static target lib/librte_mempool.a 00:02:02.110 [153/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:02.110 [154/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:02.110 [155/707] Generating 
lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.379 [156/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:02.379 [157/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:02.379 [158/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:02.379 [159/707] Linking target lib/librte_kvargs.so.24.0 00:02:02.379 [160/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:02.379 [161/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:02.379 [162/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:02.379 [163/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:02.379 [164/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:02.379 [165/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:02.379 [166/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:02.379 [167/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:02.379 [168/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:02.379 [169/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:02.379 [170/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:02.379 [171/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.379 [172/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:02.379 [173/707] Linking static target lib/librte_jobstats.a 00:02:02.379 [174/707] Linking static target lib/librte_compressdev.a 00:02:02.379 [175/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:02.379 [176/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:02.379 [177/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:02.379 [178/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:02.379 [179/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:02.380 [180/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:02.380 [181/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:02.380 [182/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.380 [183/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:02.380 [184/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:02.380 [185/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:02.380 [186/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:02.643 [187/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:02.643 [188/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:02.643 [189/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:02.643 [190/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:02.643 [191/707] Linking static target lib/librte_dispatcher.a 00:02:02.643 [192/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:02.643 [193/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:02.643 [194/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:02.643 [195/707] Compiling C object 
lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:02.643 [196/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:02.643 [197/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:02.643 [198/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:02.643 [199/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:02.643 [200/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:02.643 [201/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:02.643 [202/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:02.643 [203/707] Linking static target lib/librte_latencystats.a 00:02:02.643 [204/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:02.643 [205/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.643 [206/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:02.644 [207/707] Linking static target lib/librte_telemetry.a 00:02:02.644 [208/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:02.644 [209/707] Linking static target lib/librte_gpudev.a 00:02:02.644 [210/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:02.644 [211/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:02.644 [212/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:02.644 [213/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:02.644 [214/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:02.644 [215/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:02.644 [216/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:02.644 [217/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:02.644 [218/707] Linking static target lib/librte_eal.a 00:02:02.644 [219/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:02.644 [220/707] Linking static target lib/librte_gro.a 00:02:02.644 [221/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:02.644 [222/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.644 [223/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:02.644 [224/707] Linking static target lib/librte_rcu.a 00:02:02.644 [225/707] Linking static target lib/librte_stack.a 00:02:02.644 [226/707] Linking static target lib/librte_dmadev.a 00:02:02.644 [227/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:02.644 [228/707] Linking static target lib/librte_distributor.a 00:02:02.644 [229/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:02.644 [230/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:02.644 [231/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:02.644 [232/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:02.644 [233/707] Linking static target lib/librte_regexdev.a 00:02:02.644 [234/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:02.918 [235/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:02.918 [236/707] Linking static target lib/librte_gso.a 00:02:02.918 [237/707] Linking static target lib/librte_mbuf.a 00:02:02.918 [238/707] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:02.918 [239/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:02.918 [240/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:02.918 [241/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:02.918 [242/707] Linking static target lib/librte_rawdev.a 00:02:02.918 [243/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:02.918 [244/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:02.918 [245/707] Linking static target lib/librte_power.a 00:02:02.918 [246/707] Linking static target lib/librte_mldev.a 00:02:02.918 [247/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:02.918 [248/707] Linking static target lib/librte_ip_frag.a 00:02:02.918 [249/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.918 [250/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:02.918 [251/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:02.918 [252/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:02.918 [253/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:02.918 [254/707] Linking static target lib/librte_pcapng.a 00:02:02.918 [255/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:02.918 [256/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:02.918 [257/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:02.918 [258/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:02.918 [259/707] Linking static target lib/librte_reorder.a 00:02:02.918 [260/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:02.918 [261/707] Linking static target lib/librte_bpf.a 00:02:02.918 [262/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.185 [263/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:03.185 [264/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:03.185 [265/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.185 [266/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:03.185 [267/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:03.185 [268/707] Linking static target lib/librte_security.a 00:02:03.185 [269/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.185 [270/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:03.185 [271/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:03.185 [272/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.185 [273/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:03.185 [274/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:03.185 [275/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:03.185 [276/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:03.185 [277/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:03.185 [278/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.185 [279/707] Generating lib/cmdline.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:03.185 [280/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.185 [281/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:03.185 [282/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:03.185 [283/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:03.185 [284/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.185 [285/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:03.185 [286/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:03.185 [287/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.185 [288/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.448 [289/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:03.448 [290/707] Linking static target lib/librte_lpm.a 00:02:03.448 [291/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:03.448 [292/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.448 [293/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.448 [294/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:03.448 [295/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:03.448 [296/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:03.448 [297/707] Linking static target lib/librte_rib.a 00:02:03.448 [298/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.448 [299/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:03.448 [300/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:03.448 [301/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.448 [302/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:03.448 [303/707] Linking target lib/librte_telemetry.so.24.0 00:02:03.448 [304/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.448 [305/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:03.448 [306/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:03.448 [307/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:03.448 [308/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:03.448 [309/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:03.448 [310/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:03.449 [311/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.449 [312/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:03.449 [313/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:03.449 [314/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:03.449 [315/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:03.715 [316/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.715 [317/707] Compiling C object 
lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:03.715 [318/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:03.715 [319/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:03.715 [320/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:03.715 [321/707] Linking static target lib/librte_efd.a 00:02:03.715 [322/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:03.715 [323/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:03.715 [324/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:03.715 [325/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:03.715 [326/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:03.715 [327/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:03.715 [328/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:03.715 [329/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:03.715 [330/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:03.716 [331/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:03.716 [332/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:03.716 [333/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.716 [334/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:03.716 [335/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:03.716 [336/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.716 [337/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:03.716 [338/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:03.716 [339/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:03.716 [340/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.716 [341/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:03.983 [342/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:03.983 [343/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:03.983 [344/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:03.983 [345/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:03.983 [346/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:03.983 [347/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.983 [348/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:03.983 [349/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:03.983 [350/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:03.983 [351/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:03.983 [352/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:03.983 [353/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:03.983 [354/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:03.983 [355/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:03.983 [356/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:03.983 [357/707] Generating 
lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.983 [358/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:03.983 [359/707] Linking static target lib/librte_fib.a 00:02:03.983 [360/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:03.983 [361/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.983 [362/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:03.983 [363/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:03.983 [364/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:03.983 [365/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:04.245 [366/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:04.245 [367/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:04.245 [368/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.245 [369/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:04.245 [370/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:04.245 [371/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.245 [372/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:04.245 [373/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:04.245 [374/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:04.245 [375/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:04.245 [376/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:04.245 [377/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:04.245 [378/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:04.245 [379/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.245 [380/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:04.245 [381/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:04.245 [382/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:04.245 [383/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:04.245 [384/707] Linking static target lib/librte_graph.a 00:02:04.245 [385/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:04.245 [386/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:04.245 [387/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:04.245 [388/707] Linking static target lib/librte_pdump.a 00:02:04.507 [389/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:04.507 [390/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:04.507 [391/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:04.507 [392/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:04.507 [393/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:04.507 [394/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:04.507 [395/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:04.507 [396/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:04.507 [397/707] Compiling C object 
lib/librte_node.a.p/node_pkt_cls.c.o 00:02:04.507 [398/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:04.507 [399/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:04.507 [400/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:04.507 [401/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:04.507 [402/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:04.507 [403/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:04.507 [404/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:04.507 [405/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:04.507 [406/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:04.507 [407/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:04.507 [408/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.507 [409/707] Linking static target drivers/librte_bus_vdev.a 00:02:04.507 [410/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:04.507 [411/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.507 [412/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:04.507 [413/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.508 [414/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:04.768 [415/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:04.768 [416/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:04.768 [417/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:04.768 [418/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:04.768 [419/707] Linking static target lib/librte_table.a 00:02:04.768 [420/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:04.768 [421/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:04.768 [422/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:04.768 [423/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:04.768 [424/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:04.768 [425/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:04.768 [426/707] Linking static target lib/librte_sched.a 00:02:04.768 [427/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:04.768 [428/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:04.768 [429/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:04.768 [430/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:04.768 [431/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.768 [432/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:04.768 [433/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:04.768 [434/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.768 [435/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.768 [436/707] Linking static target 
lib/librte_cryptodev.a 00:02:04.768 [437/707] Linking static target drivers/librte_bus_pci.a 00:02:04.769 [438/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:05.028 [439/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:05.028 [440/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:05.028 [441/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:05.028 [442/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:05.028 [443/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:05.028 [444/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:05.028 [445/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:05.028 [446/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:05.028 [447/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:05.028 [448/707] Linking static target lib/librte_ipsec.a 00:02:05.028 [449/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:05.028 [450/707] Linking static target lib/librte_member.a 00:02:05.028 [451/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:05.028 [452/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:05.028 [453/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:05.028 [454/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:05.028 [455/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:05.028 [456/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.028 [457/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:05.028 [458/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:05.028 [459/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:05.028 [460/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:05.028 [461/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:05.028 [462/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:05.028 [463/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:05.028 [464/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:05.028 [465/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:05.028 [466/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:05.028 [467/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:05.028 [468/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:05.287 [469/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:05.287 [470/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:05.287 [471/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:05.287 [472/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.287 [473/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:05.287 [474/707] Compiling C 
object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:05.287 [475/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:05.287 [476/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:05.287 [477/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:05.287 [478/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:05.287 [479/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:05.287 [480/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:05.287 [481/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:05.287 [482/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:05.287 [483/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:05.287 [484/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:05.287 [485/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:05.287 [486/707] Linking static target lib/librte_node.a 00:02:05.287 [487/707] Linking static target lib/librte_pdcp.a 00:02:05.287 [488/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:05.287 [489/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.287 [490/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:05.287 [491/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:05.287 [492/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:05.287 [493/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:05.287 [494/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:05.287 [495/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.287 [496/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:05.287 [497/707] Linking static target lib/librte_hash.a 00:02:05.287 [498/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.287 [499/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.287 [500/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:05.287 [501/707] Linking static target drivers/librte_mempool_ring.a 00:02:05.287 [502/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:05.544 [503/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.545 [504/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:05.545 [505/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:05.545 [506/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.545 [507/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:05.545 [508/707] Linking static target lib/acl/libavx2_tmp.a 00:02:05.545 [509/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:05.545 [510/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:05.545 [511/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 
00:02:05.545 [512/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:05.545 [513/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:05.545 [514/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:05.545 [515/707] Linking static target lib/librte_port.a 00:02:05.545 [516/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:05.545 [517/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.545 [518/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:05.545 [519/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:05.545 [520/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:05.545 [521/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:05.545 [522/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:05.545 [523/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:05.545 [524/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:05.545 [525/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:05.545 [526/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:05.545 [527/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.545 [528/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:05.545 [529/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.545 [530/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:05.545 [531/707] Linking static target lib/librte_eventdev.a 00:02:05.545 [532/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:05.802 [533/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:05.802 [534/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:05.802 [535/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:05.802 [536/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.802 [537/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:05.802 [538/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:05.802 [539/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:05.802 [540/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:05.802 [541/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:05.802 [542/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:05.802 [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:05.803 [544/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:05.803 [545/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:05.803 [546/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:05.803 [547/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:05.803 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:05.803 [549/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:05.803 
[550/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:05.803 [551/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:05.803 [552/707] Linking static target lib/librte_acl.a 00:02:06.060 [553/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:06.060 [554/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:06.060 [555/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:06.060 [556/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:06.060 [557/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:06.060 [558/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:06.060 [559/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:06.060 [560/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:06.060 [561/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:06.060 [562/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:06.060 [563/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.060 [564/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:06.060 [565/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:06.060 [566/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:06.317 [567/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.317 [568/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.317 [569/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:06.575 [570/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:06.575 [571/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:06.575 [572/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:06.575 [573/707] Linking static target lib/librte_ethdev.a 00:02:06.575 [574/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:06.832 [575/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.832 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:06.832 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:07.397 [578/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:07.397 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:07.397 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:07.960 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:07.960 [582/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:07.960 [583/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:08.217 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:08.475 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:08.475 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:08.475 [587/707] Linking static target drivers/librte_net_i40e.a 00:02:08.475 [588/707] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:09.410 [589/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.410 [590/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:09.410 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.979 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:15.251 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.251 [594/707] Linking target lib/librte_eal.so.24.0
00:02:15.251 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:02:15.251 [596/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.251 [597/707] Linking target lib/librte_cfgfile.so.24.0
00:02:15.251 [598/707] Linking target lib/librte_stack.so.24.0
00:02:15.251 [599/707] Linking target lib/librte_pci.so.24.0
00:02:15.251 [600/707] Linking target lib/librte_timer.so.24.0
00:02:15.251 [601/707] Linking target lib/librte_ring.so.24.0
00:02:15.251 [602/707] Linking target lib/librte_meter.so.24.0
00:02:15.251 [603/707] Linking target lib/librte_rawdev.so.24.0
00:02:15.251 [604/707] Linking target drivers/librte_bus_vdev.so.24.0
00:02:15.252 [605/707] Linking target lib/librte_dmadev.so.24.0
00:02:15.252 [606/707] Linking target lib/librte_jobstats.so.24.0
00:02:15.252 [607/707] Linking target lib/librte_acl.so.24.0
00:02:15.252 [608/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:02:15.252 [609/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:02:15.252 [610/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:02:15.252 [611/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:02:15.252 [612/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:02:15.252 [613/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:02:15.252 [614/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:02:15.252 [615/707] Linking target drivers/librte_bus_pci.so.24.0
00:02:15.252 [616/707] Linking target lib/librte_mempool.so.24.0
00:02:15.252 [617/707] Linking target lib/librte_rcu.so.24.0
00:02:15.510 [618/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:02:15.510 [619/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:02:15.510 [620/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:02:15.510 [621/707] Linking target drivers/librte_mempool_ring.so.24.0
00:02:15.510 [622/707] Linking target lib/librte_rib.so.24.0
00:02:15.510 [623/707] Linking target lib/librte_mbuf.so.24.0
00:02:15.770 [624/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:02:15.770 [625/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:02:15.770 [626/707] Linking target lib/librte_gpudev.so.24.0
00:02:15.770 [627/707] Linking target lib/librte_regexdev.so.24.0
00:02:15.770 [628/707] Linking target lib/librte_bbdev.so.24.0
00:02:15.770 [629/707] Linking target lib/librte_compressdev.so.24.0
00:02:15.770 [630/707] Linking target lib/librte_distributor.so.24.0
00:02:15.770 [631/707] Linking target lib/librte_fib.so.24.0
00:02:15.770 [632/707] Linking target lib/librte_net.so.24.0
00:02:15.770 [633/707] Linking target lib/librte_mldev.so.24.0
00:02:15.770 [634/707] Linking target lib/librte_reorder.so.24.0
00:02:15.770 [635/707] Linking target lib/librte_cryptodev.so.24.0
00:02:15.770 [636/707] Linking target lib/librte_sched.so.24.0
00:02:15.770 [637/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:02:15.770 [638/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:02:15.770 [639/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:02:15.770 [640/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:02:15.770 [641/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:15.770 [642/707] Linking target lib/librte_cmdline.so.24.0
00:02:16.029 [643/707] Linking static target lib/librte_pipeline.a
00:02:16.029 [644/707] Linking target lib/librte_hash.so.24.0
00:02:16.029 [645/707] Linking target lib/librte_ethdev.so.24.0
00:02:16.029 [646/707] Linking target lib/librte_security.so.24.0
00:02:16.029 [647/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:02:16.029 [648/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:02:16.029 [649/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:02:16.029 [650/707] Linking target lib/librte_efd.so.24.0
00:02:16.029 [651/707] Linking target lib/librte_lpm.so.24.0
00:02:16.029 [652/707] Linking target lib/librte_member.so.24.0
00:02:16.029 [653/707] Linking target lib/librte_gro.so.24.0
00:02:16.029 [654/707] Linking target lib/librte_metrics.so.24.0
00:02:16.029 [655/707] Linking target lib/librte_bpf.so.24.0
00:02:16.029 [656/707] Linking target lib/librte_pcapng.so.24.0
00:02:16.029 [657/707] Linking target lib/librte_ipsec.so.24.0
00:02:16.029 [658/707] Linking target lib/librte_gso.so.24.0
00:02:16.029 [659/707] Linking target lib/librte_pdcp.so.24.0
00:02:16.029 [660/707] Linking target lib/librte_ip_frag.so.24.0
00:02:16.029 [661/707] Linking target lib/librte_power.so.24.0
00:02:16.029 [662/707] Linking target lib/librte_eventdev.so.24.0
00:02:16.288 [663/707] Linking target drivers/librte_net_i40e.so.24.0
00:02:16.288 [664/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:02:16.288 [665/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:02:16.288 [666/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:02:16.288 [667/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:02:16.288 [668/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:02:16.288 [669/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:02:16.288 [670/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:02:16.288 [671/707] Linking target lib/librte_latencystats.so.24.0
00:02:16.289 [672/707] Linking target lib/librte_bitratestats.so.24.0
00:02:16.289 [673/707] Linking target lib/librte_graph.so.24.0
00:02:16.289 [674/707] Linking target lib/librte_pdump.so.24.0
00:02:16.289 [675/707] Linking target lib/librte_dispatcher.so.24.0
00:02:16.289 [676/707] Linking target lib/librte_port.so.24.0
00:02:16.289 [677/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:16.289 [678/707] Linking static target lib/librte_vhost.a
00:02:16.547 [679/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:02:16.547 [680/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:02:16.547 [681/707] Linking target lib/librte_node.so.24.0
00:02:16.547 [682/707] Linking target lib/librte_table.so.24.0
00:02:16.804 [683/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:02:16.804 [684/707] Linking target app/dpdk-pdump
00:02:16.804 [685/707] Linking target app/dpdk-test-gpudev
00:02:16.804 [686/707] Linking target app/dpdk-test-bbdev
00:02:16.805 [687/707] Linking target app/dpdk-test-regex
00:02:16.805 [688/707] Linking target app/dpdk-dumpcap
00:02:16.805 [689/707] Linking target app/dpdk-test-acl
00:02:16.805 [690/707] Linking target app/dpdk-proc-info
00:02:16.805 [691/707] Linking target app/dpdk-test-dma-perf
00:02:16.805 [692/707] Linking target app/dpdk-test-cmdline
00:02:16.805 [693/707] Linking target app/dpdk-test-fib
00:02:16.805 [694/707] Linking target app/dpdk-test-sad
00:02:16.805 [695/707] Linking target app/dpdk-test-compress-perf
00:02:16.805 [696/707] Linking target app/dpdk-test-pipeline
00:02:16.805 [697/707] Linking target app/dpdk-graph
00:02:16.805 [698/707] Linking target app/dpdk-test-crypto-perf
00:02:16.805 [699/707] Linking target app/dpdk-test-eventdev
00:02:16.805 [700/707] Linking target app/dpdk-test-flow-perf
00:02:16.805 [701/707] Linking target app/dpdk-test-mldev
00:02:16.805 [702/707] Linking target app/dpdk-test-security-perf
00:02:17.098 [703/707] Linking target app/dpdk-testpmd
00:02:18.473 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.732 [705/707] Linking target lib/librte_vhost.so.24.0
00:02:22.024 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.024 [707/707] Linking target lib/librte_pipeline.so.24.0
00:02:22.024 20:23:10 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:22.024 20:23:10 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:22.024 20:23:10 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install
00:02:22.024 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp'
00:02:22.024 [0/1] Installing files.
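[Editor's note] The script-trace lines above (autobuild_common.sh@191 and @204) show the build script testing uname -s against FreeBSD and, since this runner is Linux, falling through to a plain ninja install of the meson build tree. A minimal bash sketch of that sequence follows; it assumes an earlier meson setup step (outside this excerpt) configured build-tmp with dpdk/build as the install prefix, and DPDK_DIR plus the comments are illustrative rather than copied from autobuild_common.sh:

    #!/usr/bin/env bash
    # Sketch only: reproduces the uname gate and install step traced above.
    # Assumes meson has already configured $DPDK_DIR/build-tmp with
    # --prefix=$DPDK_DIR/build; that configure step is not shown in this log.
    set -euo pipefail

    DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk

    if [[ "$(uname -s)" == "FreeBSD" ]]; then
        # The FreeBSD branch is not taken in this run and is not reproduced here.
        echo "FreeBSD install path omitted from this sketch" >&2
    else
        # Copies the built libraries, apps, headers, and example sources into
        # the configured prefix. This run passed -j112 (presumably the
        # runner's thread count); $(nproc) is the portable equivalent.
        ninja -C "$DPDK_DIR/build-tmp" -j"$(nproc)" install
    fi

The long "Installing ... to ..." listing that follows is ordinary ninja install output: meson mirrors the dpdk/examples sources into share/dpdk/examples under the install prefix, which is why every entry pairs a file under dpdk/examples with the matching path under dpdk/build/share/dpdk/examples.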
00:02:22.024 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:22.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:22.025 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:22.025 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:22.026 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:22.026 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.027 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:22.028 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
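Note: the install lines above stage the SWX pipeline examples, where each example pairs a .spec file (the pipeline program), a .cli script that creates ports and loads the program, and .txt table-data files. A minimal usage sketch, assuming the staged example Makefile's pkg-config flow, a build/pipeline output binary, and the app's -s script flag (none of which appear in this log):

    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig   # assumed .pc location, not shown in this log
    cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
    make                                                      # example Makefiles resolve libdpdk via pkg-config
    sudo ./build/pipeline -l 0-1 -- -s examples/l2fwd.cli     # run the staged l2fwd setup script

Under these assumptions the app compiles the referenced .spec at startup, with the .cli script driving port creation and pipeline bring-up over its command-line interface.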
00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:22.029 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:22.030 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:22.030 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:22.030 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:22.030 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:22.030 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:22.030 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing 
lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_gro.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 
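Note: each library is staged twice, as a static archive (.a) and as a versioned shared object (.so.24.0, the ABI level this DPDK 23.11 build targets). A minimal sketch of consuming the staged tree, assuming meson wrote libdpdk.pc under build/lib/pkgconfig (the .pc install does not appear in this excerpt) and a hypothetical hello_dpdk.c source:

    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig   # assumed .pc location
    cc -O2 hello_dpdk.c -o hello_dpdk $(pkg-config --cflags --libs libdpdk)                    # link against the staged libraries
    readelf -d /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.24.0 | grep SONAME   # expect librte_ring.so.24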
00:02:22.030 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.030 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.031 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.031 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:22.031 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.031 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:22.031 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.031 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:22.031 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.031 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:22.031 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.031 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-bbdev to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.294 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 
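Note: the ring library stages its public header (rte_ring.h) alongside the HTS/RTS synchronization variants, the peek and zero-copy APIs, and the internal *_pvt.h implementation headers, because the ring fast path is inline and is compiled into applications rather than called through the .so. A quick compile probe against the staged include tree, assuming pkg-config resolves libdpdk as in the sketch above (the probe file name is illustrative):

    printf '#include <rte_ring.h>\nint main(void){return 0;}\n' > ring_probe.c   # hypothetical header probe
    cc -c ring_probe.c $(pkg-config --cflags libdpdk)                            # header-only compile check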
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.295 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.296 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
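
A quick aside on the entries above: they stage DPDK's public headers into the build/include install prefix. As a hedged illustration (this block is not part of the log; the gcc invocation and choice of header are assumptions, and some DPDK headers additionally want an -march flag on x86), the staged tree can be smoke-tested once the install finishes:

    # Minimal sketch, assuming the workspace path from the log and a gcc toolchain.
    INC=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
    printf '#include <rte_mpls.h>\nint main(void){return 0;}\n' | \
        gcc -xc - -o /dev/null -I"$INC"
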
00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.297 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.298 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.298 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.298 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.298 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.298 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:22.298 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.298 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:22.298 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:22.298 Installing symlink pointing to librte_log.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:22.298 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so 00:02:22.298 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:22.298 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:22.298 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:22.298 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:22.298 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:22.298 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:22.298 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:22.298 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:22.298 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:22.298 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:22.298 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:22.298 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:22.298 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:22.298 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:22.298 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:22.298 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:02:22.298 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:22.298 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:22.298 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:22.298 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:22.298 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:22.298 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:22.298 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:22.298 Installing symlink pointing to librte_cmdline.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:22.298 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:22.298 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:22.298 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:22.298 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:22.298 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:22.298 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:22.298 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:22.298 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:22.298 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:22.298 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:22.298 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:22.298 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:22.298 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:22.298 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:22.298 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:22.298 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:22.298 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:22.298 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:22.298 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:22.298 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:22.298 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:22.298 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:22.298 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:22.298 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:22.298 
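
Each shared library above gets the conventional three-name chain: the real DSO carries the full version (librte_hash.so.24.0), a first symlink carries the soname the dynamic linker records (librte_hash.so.24), and a second carries the bare name the link editor resolves through -lrte_hash. Spelled out by hand for one library from the entries above (a sketch of the effect, not the literal meson install step):

    cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
    # librte_hash.so.24.0 is the actual DSO, named with the full ABI version
    ln -sf librte_hash.so.24.0 librte_hash.so.24   # soname: recorded by consumers at runtime
    ln -sf librte_hash.so.24   librte_hash.so      # dev name: found by -lrte_hash at link time
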
Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:22.298 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:22.298 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:22.298 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:22.298 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:22.298 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:22.298 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:22.298 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:22.298 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:22.298 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:22.298 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:22.298 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:22.298 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:22.298 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:22.298 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:22.298 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:22.298 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:22.298 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:22.298 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:22.298 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:22.298 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:22.299 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:22.299 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:22.299 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:22.299 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:22.299 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:22.299 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:22.299 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:22.299 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:22.299 Installing symlink pointing to librte_latencystats.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:22.299 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:22.299 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:22.299 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:22.299 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:02:22.299 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:22.299 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:22.299 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:22.299 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:02:22.299 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:22.299 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:22.299 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:22.299 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:22.299 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:22.299 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:22.299 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:22.299 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:22.299 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:22.299 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:22.299 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:22.299 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:22.299 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:22.299 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:02:22.299 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:22.299 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:22.299 Installing symlink pointing to librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:22.299 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:22.299 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:22.299 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:22.299 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:22.299 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:22.299 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:22.299 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:22.299 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:22.299 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:02:22.299 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:22.299 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:22.299 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:22.299 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:02:22.299 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:22.299 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:22.299 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:22.299 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:22.299 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:22.299 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:02:22.299 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:22.299 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:22.299 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:22.299 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:22.299 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 
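
The './librte_bus_pci.so' -> 'dpdk/pmds-24.0/...' moves above relocate the driver libraries into a version-scoped plugin directory, and the surrounding symlink entries keep the three-name chain intact inside it. For illustration (the application name and arguments are hypothetical), DPDK's EAL can load drivers from that directory at startup with its -d option:

    PMDS=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
    # load every PMD staged in the plugin directory ...
    ./my_dpdk_app -d "$PMDS"
    # ... or just the i40e net driver seen in the log
    ./my_dpdk_app -d "$PMDS/librte_net_i40e.so"
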
00:02:22.299 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:22.299 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:22.299 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:22.299 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:22.299 20:23:10 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:02:22.299 20:23:10 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:22.299 00:02:22.299 real 0m27.767s 00:02:22.299 user 8m3.148s 00:02:22.299 sys 2m38.871s 00:02:22.299 20:23:10 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:22.299 20:23:10 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:22.299 ************************************ 00:02:22.299 END TEST build_native_dpdk 00:02:22.299 ************************************ 00:02:22.299 20:23:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:22.299 20:23:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:22.299 20:23:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:22.299 20:23:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:22.299 20:23:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:22.299 20:23:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:22.299 20:23:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:22.299 20:23:10 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:02:22.559 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:22.818 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.818 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.818 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:23.077 Using 'verbs' RDMA provider 00:02:36.226 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:51.113 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:51.113 Creating mk/config.mk...done. 00:02:51.113 Creating mk/cc.flags.mk...done. 00:02:51.113 Type 'make' to build. 00:02:51.113 20:23:38 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:51.113 20:23:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:51.113 20:23:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:51.113 20:23:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.113 ************************************ 00:02:51.113 START TEST make 00:02:51.113 ************************************ 00:02:51.113 20:23:38 make -- common/autotest_common.sh@1125 -- $ make -j112 00:02:51.113 make[1]: Nothing to be done for 'all'. 
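
The configure step above wires SPDK to the freshly staged DPDK instead of its bundled submodule: --with-dpdk points at the install prefix, and the "Using ... pkgconfig for additional libs" line shows the flags being resolved from the libdpdk.pc installed earlier. A pared-down re-run under the same layout would look like this (only a subset of the flags from the log; nproc stands in for the CI job count):

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./configure --with-shared \
        --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
    make -j"$(nproc)"
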
00:03:01.149 CC lib/ut_mock/mock.o 00:03:01.149 CC lib/ut/ut.o 00:03:01.149 CC lib/log/log.o 00:03:01.149 CC lib/log/log_flags.o 00:03:01.149 CC lib/log/log_deprecated.o 00:03:01.149 LIB libspdk_ut_mock.a 00:03:01.149 LIB libspdk_ut.a 00:03:01.149 SO libspdk_ut_mock.so.6.0 00:03:01.149 LIB libspdk_log.a 00:03:01.149 SO libspdk_ut.so.2.0 00:03:01.149 SO libspdk_log.so.7.0 00:03:01.149 SYMLINK libspdk_ut_mock.so 00:03:01.149 SYMLINK libspdk_ut.so 00:03:01.149 SYMLINK libspdk_log.so 00:03:01.407 CC lib/ioat/ioat.o 00:03:01.407 CXX lib/trace_parser/trace.o 00:03:01.407 CC lib/dma/dma.o 00:03:01.407 CC lib/util/base64.o 00:03:01.407 CC lib/util/bit_array.o 00:03:01.407 CC lib/util/crc32.o 00:03:01.407 CC lib/util/cpuset.o 00:03:01.407 CC lib/util/crc16.o 00:03:01.407 CC lib/util/crc32c.o 00:03:01.407 CC lib/util/crc32_ieee.o 00:03:01.407 CC lib/util/fd.o 00:03:01.407 CC lib/util/crc64.o 00:03:01.407 CC lib/util/dif.o 00:03:01.407 CC lib/util/hexlify.o 00:03:01.407 CC lib/util/fd_group.o 00:03:01.407 CC lib/util/file.o 00:03:01.407 CC lib/util/net.o 00:03:01.407 CC lib/util/iov.o 00:03:01.407 CC lib/util/math.o 00:03:01.407 CC lib/util/pipe.o 00:03:01.407 CC lib/util/strerror_tls.o 00:03:01.407 CC lib/util/string.o 00:03:01.407 CC lib/util/uuid.o 00:03:01.407 CC lib/util/xor.o 00:03:01.407 CC lib/util/zipf.o 00:03:01.665 CC lib/vfio_user/host/vfio_user_pci.o 00:03:01.665 CC lib/vfio_user/host/vfio_user.o 00:03:01.665 LIB libspdk_dma.a 00:03:01.665 LIB libspdk_ioat.a 00:03:01.665 SO libspdk_dma.so.4.0 00:03:01.665 SO libspdk_ioat.so.7.0 00:03:01.665 SYMLINK libspdk_dma.so 00:03:01.665 SYMLINK libspdk_ioat.so 00:03:01.665 LIB libspdk_vfio_user.a 00:03:01.924 LIB libspdk_util.a 00:03:01.924 SO libspdk_vfio_user.so.5.0 00:03:01.924 SYMLINK libspdk_vfio_user.so 00:03:01.924 SO libspdk_util.so.10.0 00:03:01.924 SYMLINK libspdk_util.so 00:03:02.182 LIB libspdk_trace_parser.a 00:03:02.182 SO libspdk_trace_parser.so.5.0 00:03:02.182 SYMLINK libspdk_trace_parser.so 00:03:02.440 CC lib/rdma_utils/rdma_utils.o 00:03:02.440 CC lib/json/json_parse.o 00:03:02.440 CC lib/json/json_util.o 00:03:02.440 CC lib/json/json_write.o 00:03:02.440 CC lib/idxd/idxd.o 00:03:02.440 CC lib/idxd/idxd_kernel.o 00:03:02.440 CC lib/vmd/vmd.o 00:03:02.440 CC lib/idxd/idxd_user.o 00:03:02.440 CC lib/vmd/led.o 00:03:02.440 CC lib/conf/conf.o 00:03:02.440 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:02.440 CC lib/rdma_provider/common.o 00:03:02.440 CC lib/env_dpdk/env.o 00:03:02.440 CC lib/env_dpdk/pci.o 00:03:02.440 CC lib/env_dpdk/memory.o 00:03:02.440 CC lib/env_dpdk/init.o 00:03:02.440 CC lib/env_dpdk/threads.o 00:03:02.440 CC lib/env_dpdk/pci_ioat.o 00:03:02.440 CC lib/env_dpdk/pci_vmd.o 00:03:02.440 CC lib/env_dpdk/pci_idxd.o 00:03:02.440 CC lib/env_dpdk/pci_virtio.o 00:03:02.440 CC lib/env_dpdk/pci_event.o 00:03:02.440 CC lib/env_dpdk/sigbus_handler.o 00:03:02.440 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:02.440 CC lib/env_dpdk/pci_dpdk.o 00:03:02.440 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:02.698 LIB libspdk_rdma_provider.a 00:03:02.698 LIB libspdk_conf.a 00:03:02.698 LIB libspdk_rdma_utils.a 00:03:02.698 SO libspdk_rdma_provider.so.6.0 00:03:02.698 LIB libspdk_json.a 00:03:02.698 SO libspdk_conf.so.6.0 00:03:02.698 SO libspdk_rdma_utils.so.1.0 00:03:02.698 SO libspdk_json.so.6.0 00:03:02.698 SYMLINK libspdk_rdma_provider.so 00:03:02.698 SYMLINK libspdk_conf.so 00:03:02.698 SYMLINK libspdk_rdma_utils.so 00:03:02.698 SYMLINK libspdk_json.so 00:03:02.956 LIB libspdk_idxd.a 00:03:02.956 SO libspdk_idxd.so.12.0 00:03:02.956 LIB 
libspdk_vmd.a 00:03:02.956 SO libspdk_vmd.so.6.0 00:03:02.956 SYMLINK libspdk_idxd.so 00:03:02.956 SYMLINK libspdk_vmd.so 00:03:03.215 CC lib/jsonrpc/jsonrpc_server.o 00:03:03.215 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:03.215 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:03.215 CC lib/jsonrpc/jsonrpc_client.o 00:03:03.473 LIB libspdk_jsonrpc.a 00:03:03.473 LIB libspdk_env_dpdk.a 00:03:03.473 SO libspdk_jsonrpc.so.6.0 00:03:03.473 SO libspdk_env_dpdk.so.15.0 00:03:03.473 SYMLINK libspdk_jsonrpc.so 00:03:03.731 SYMLINK libspdk_env_dpdk.so 00:03:03.731 CC lib/rpc/rpc.o 00:03:03.990 LIB libspdk_rpc.a 00:03:03.990 SO libspdk_rpc.so.6.0 00:03:03.990 SYMLINK libspdk_rpc.so 00:03:04.557 CC lib/notify/notify.o 00:03:04.557 CC lib/notify/notify_rpc.o 00:03:04.557 CC lib/trace/trace.o 00:03:04.557 CC lib/trace/trace_flags.o 00:03:04.557 CC lib/trace/trace_rpc.o 00:03:04.557 CC lib/keyring/keyring_rpc.o 00:03:04.557 CC lib/keyring/keyring.o 00:03:04.557 LIB libspdk_notify.a 00:03:04.557 SO libspdk_notify.so.6.0 00:03:04.557 LIB libspdk_keyring.a 00:03:04.557 LIB libspdk_trace.a 00:03:04.815 SO libspdk_keyring.so.1.0 00:03:04.815 SYMLINK libspdk_notify.so 00:03:04.815 SO libspdk_trace.so.10.0 00:03:04.815 SYMLINK libspdk_keyring.so 00:03:04.815 SYMLINK libspdk_trace.so 00:03:05.074 CC lib/thread/thread.o 00:03:05.074 CC lib/thread/iobuf.o 00:03:05.074 CC lib/sock/sock.o 00:03:05.074 CC lib/sock/sock_rpc.o 00:03:05.332 LIB libspdk_sock.a 00:03:05.591 SO libspdk_sock.so.10.0 00:03:05.591 SYMLINK libspdk_sock.so 00:03:05.849 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:05.849 CC lib/nvme/nvme_ctrlr.o 00:03:05.849 CC lib/nvme/nvme_fabric.o 00:03:05.849 CC lib/nvme/nvme_ns_cmd.o 00:03:05.849 CC lib/nvme/nvme_ns.o 00:03:05.849 CC lib/nvme/nvme_pcie_common.o 00:03:05.849 CC lib/nvme/nvme_pcie.o 00:03:05.849 CC lib/nvme/nvme_qpair.o 00:03:05.849 CC lib/nvme/nvme.o 00:03:05.849 CC lib/nvme/nvme_quirks.o 00:03:05.849 CC lib/nvme/nvme_transport.o 00:03:05.849 CC lib/nvme/nvme_discovery.o 00:03:05.849 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:05.849 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:05.849 CC lib/nvme/nvme_tcp.o 00:03:05.849 CC lib/nvme/nvme_poll_group.o 00:03:05.849 CC lib/nvme/nvme_opal.o 00:03:05.849 CC lib/nvme/nvme_io_msg.o 00:03:05.849 CC lib/nvme/nvme_zns.o 00:03:05.849 CC lib/nvme/nvme_stubs.o 00:03:05.849 CC lib/nvme/nvme_auth.o 00:03:05.849 CC lib/nvme/nvme_cuse.o 00:03:05.849 CC lib/nvme/nvme_rdma.o 00:03:06.107 LIB libspdk_thread.a 00:03:06.107 SO libspdk_thread.so.10.1 00:03:06.365 SYMLINK libspdk_thread.so 00:03:06.624 CC lib/blob/blobstore.o 00:03:06.624 CC lib/accel/accel.o 00:03:06.624 CC lib/accel/accel_sw.o 00:03:06.624 CC lib/accel/accel_rpc.o 00:03:06.624 CC lib/blob/request.o 00:03:06.624 CC lib/blob/zeroes.o 00:03:06.624 CC lib/blob/blob_bs_dev.o 00:03:06.624 CC lib/virtio/virtio.o 00:03:06.624 CC lib/virtio/virtio_vfio_user.o 00:03:06.624 CC lib/init/json_config.o 00:03:06.624 CC lib/virtio/virtio_vhost_user.o 00:03:06.624 CC lib/init/subsystem.o 00:03:06.624 CC lib/init/subsystem_rpc.o 00:03:06.624 CC lib/init/rpc.o 00:03:06.624 CC lib/virtio/virtio_pci.o 00:03:06.882 LIB libspdk_init.a 00:03:06.882 SO libspdk_init.so.5.0 00:03:06.882 LIB libspdk_virtio.a 00:03:06.882 SO libspdk_virtio.so.7.0 00:03:06.882 SYMLINK libspdk_init.so 00:03:07.141 SYMLINK libspdk_virtio.so 00:03:07.400 CC lib/event/app.o 00:03:07.400 CC lib/event/reactor.o 00:03:07.400 CC lib/event/app_rpc.o 00:03:07.400 CC lib/event/log_rpc.o 00:03:07.400 CC lib/event/scheduler_static.o 00:03:07.400 LIB libspdk_accel.a 00:03:07.400 SO 
libspdk_accel.so.16.0 00:03:07.400 SYMLINK libspdk_accel.so 00:03:07.400 LIB libspdk_nvme.a 00:03:07.658 SO libspdk_nvme.so.13.1 00:03:07.658 LIB libspdk_event.a 00:03:07.658 SO libspdk_event.so.14.0 00:03:07.658 SYMLINK libspdk_event.so 00:03:07.658 CC lib/bdev/bdev.o 00:03:07.658 CC lib/bdev/bdev_rpc.o 00:03:07.658 CC lib/bdev/part.o 00:03:07.658 CC lib/bdev/bdev_zone.o 00:03:07.658 CC lib/bdev/scsi_nvme.o 00:03:07.917 SYMLINK libspdk_nvme.so 00:03:08.852 LIB libspdk_blob.a 00:03:08.852 SO libspdk_blob.so.11.0 00:03:08.852 SYMLINK libspdk_blob.so 00:03:09.111 CC lib/lvol/lvol.o 00:03:09.111 CC lib/blobfs/blobfs.o 00:03:09.111 CC lib/blobfs/tree.o 00:03:09.678 LIB libspdk_bdev.a 00:03:09.678 SO libspdk_bdev.so.16.0 00:03:09.678 SYMLINK libspdk_bdev.so 00:03:09.678 LIB libspdk_blobfs.a 00:03:09.678 LIB libspdk_lvol.a 00:03:09.678 SO libspdk_blobfs.so.10.0 00:03:09.937 SO libspdk_lvol.so.10.0 00:03:09.937 SYMLINK libspdk_blobfs.so 00:03:09.937 SYMLINK libspdk_lvol.so 00:03:09.937 CC lib/ublk/ublk.o 00:03:09.937 CC lib/nvmf/ctrlr.o 00:03:09.937 CC lib/ublk/ublk_rpc.o 00:03:09.937 CC lib/nvmf/ctrlr_discovery.o 00:03:09.937 CC lib/nvmf/nvmf.o 00:03:09.937 CC lib/nvmf/ctrlr_bdev.o 00:03:09.937 CC lib/nvmf/subsystem.o 00:03:09.937 CC lib/nvmf/nvmf_rpc.o 00:03:09.937 CC lib/nvmf/stubs.o 00:03:09.937 CC lib/nvmf/transport.o 00:03:09.937 CC lib/nvmf/tcp.o 00:03:09.937 CC lib/nvmf/mdns_server.o 00:03:09.937 CC lib/nvmf/rdma.o 00:03:09.937 CC lib/nvmf/auth.o 00:03:09.937 CC lib/scsi/lun.o 00:03:09.937 CC lib/scsi/dev.o 00:03:09.937 CC lib/scsi/port.o 00:03:09.937 CC lib/nbd/nbd_rpc.o 00:03:09.937 CC lib/nbd/nbd.o 00:03:09.937 CC lib/scsi/scsi.o 00:03:09.937 CC lib/scsi/scsi_bdev.o 00:03:09.937 CC lib/scsi/scsi_pr.o 00:03:09.937 CC lib/scsi/scsi_rpc.o 00:03:09.937 CC lib/scsi/task.o 00:03:09.937 CC lib/ftl/ftl_core.o 00:03:09.937 CC lib/ftl/ftl_init.o 00:03:09.937 CC lib/ftl/ftl_layout.o 00:03:09.937 CC lib/ftl/ftl_debug.o 00:03:09.937 CC lib/ftl/ftl_io.o 00:03:09.937 CC lib/ftl/ftl_sb.o 00:03:09.937 CC lib/ftl/ftl_l2p.o 00:03:09.937 CC lib/ftl/ftl_band.o 00:03:09.937 CC lib/ftl/ftl_l2p_flat.o 00:03:09.937 CC lib/ftl/ftl_nv_cache.o 00:03:09.937 CC lib/ftl/ftl_band_ops.o 00:03:09.937 CC lib/ftl/ftl_writer.o 00:03:09.937 CC lib/ftl/ftl_rq.o 00:03:09.937 CC lib/ftl/ftl_reloc.o 00:03:09.937 CC lib/ftl/ftl_l2p_cache.o 00:03:09.937 CC lib/ftl/ftl_p2l.o 00:03:10.196 CC lib/ftl/mngt/ftl_mngt.o 00:03:10.196 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:10.196 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:10.196 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:10.196 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:10.196 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:10.196 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:10.196 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:10.196 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:10.196 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:10.196 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:10.196 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:10.196 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:10.196 CC lib/ftl/utils/ftl_md.o 00:03:10.196 CC lib/ftl/utils/ftl_conf.o 00:03:10.196 CC lib/ftl/utils/ftl_mempool.o 00:03:10.196 CC lib/ftl/utils/ftl_bitmap.o 00:03:10.196 CC lib/ftl/utils/ftl_property.o 00:03:10.196 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:10.196 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:10.196 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:10.196 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:10.196 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:10.196 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:10.196 CC lib/ftl/upgrade/ftl_trim_upgrade.o 
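
The one-word prefixes in the make output are SPDK's quiet-build shorthand: CC compiles a single object, LIB archives objects into a static library, SO links the versioned shared object, and SYMLINK drops the unversioned name beside it. Expanded by hand for the ut_mock module near the top of the build (flags elided; the literal commands are an assumption, not copied from the log):

    cc -c lib/ut_mock/mock.c -o lib/ut_mock/mock.o                       # CC  lib/ut_mock/mock.o
    ar crs build/lib/libspdk_ut_mock.a lib/ut_mock/mock.o                # LIB libspdk_ut_mock.a
    cc -shared -o build/lib/libspdk_ut_mock.so.6.0 lib/ut_mock/mock.o    # SO  libspdk_ut_mock.so.6.0
    ln -sf libspdk_ut_mock.so.6.0 build/lib/libspdk_ut_mock.so           # SYMLINK libspdk_ut_mock.so
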
00:03:10.196 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:10.196 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:10.196 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:10.196 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:10.196 CC lib/ftl/base/ftl_base_dev.o 00:03:10.196 CC lib/ftl/base/ftl_base_bdev.o 00:03:10.196 CC lib/ftl/ftl_trace.o 00:03:10.763 LIB libspdk_nbd.a 00:03:10.763 LIB libspdk_ublk.a 00:03:10.763 SO libspdk_nbd.so.7.0 00:03:10.763 LIB libspdk_scsi.a 00:03:10.763 SO libspdk_ublk.so.3.0 00:03:10.763 SYMLINK libspdk_nbd.so 00:03:10.763 SO libspdk_scsi.so.9.0 00:03:10.763 SYMLINK libspdk_ublk.so 00:03:10.763 SYMLINK libspdk_scsi.so 00:03:11.022 LIB libspdk_ftl.a 00:03:11.022 SO libspdk_ftl.so.9.0 00:03:11.280 CC lib/iscsi/conn.o 00:03:11.280 CC lib/iscsi/init_grp.o 00:03:11.280 CC lib/iscsi/iscsi.o 00:03:11.280 CC lib/iscsi/md5.o 00:03:11.280 CC lib/iscsi/param.o 00:03:11.280 CC lib/vhost/vhost.o 00:03:11.280 CC lib/iscsi/portal_grp.o 00:03:11.280 CC lib/iscsi/iscsi_subsystem.o 00:03:11.280 CC lib/vhost/vhost_rpc.o 00:03:11.280 CC lib/iscsi/iscsi_rpc.o 00:03:11.280 CC lib/iscsi/tgt_node.o 00:03:11.280 CC lib/vhost/vhost_scsi.o 00:03:11.280 CC lib/vhost/vhost_blk.o 00:03:11.280 CC lib/iscsi/task.o 00:03:11.280 CC lib/vhost/rte_vhost_user.o 00:03:11.538 SYMLINK libspdk_ftl.so 00:03:11.538 LIB libspdk_nvmf.a 00:03:11.847 SO libspdk_nvmf.so.19.0 00:03:11.847 SYMLINK libspdk_nvmf.so 00:03:12.124 LIB libspdk_vhost.a 00:03:12.124 SO libspdk_vhost.so.8.0 00:03:12.124 SYMLINK libspdk_vhost.so 00:03:12.124 LIB libspdk_iscsi.a 00:03:12.382 SO libspdk_iscsi.so.8.0 00:03:12.382 SYMLINK libspdk_iscsi.so 00:03:12.948 CC module/env_dpdk/env_dpdk_rpc.o 00:03:12.948 LIB libspdk_env_dpdk_rpc.a 00:03:13.207 CC module/keyring/file/keyring_rpc.o 00:03:13.207 CC module/keyring/file/keyring.o 00:03:13.207 CC module/accel/error/accel_error.o 00:03:13.207 CC module/keyring/linux/keyring.o 00:03:13.207 CC module/keyring/linux/keyring_rpc.o 00:03:13.207 CC module/accel/error/accel_error_rpc.o 00:03:13.207 CC module/accel/dsa/accel_dsa.o 00:03:13.207 CC module/accel/dsa/accel_dsa_rpc.o 00:03:13.207 CC module/sock/posix/posix.o 00:03:13.207 CC module/accel/iaa/accel_iaa.o 00:03:13.207 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:13.207 CC module/accel/iaa/accel_iaa_rpc.o 00:03:13.207 CC module/accel/ioat/accel_ioat.o 00:03:13.207 SO libspdk_env_dpdk_rpc.so.6.0 00:03:13.207 CC module/accel/ioat/accel_ioat_rpc.o 00:03:13.207 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:13.207 CC module/blob/bdev/blob_bdev.o 00:03:13.207 CC module/scheduler/gscheduler/gscheduler.o 00:03:13.207 SYMLINK libspdk_env_dpdk_rpc.so 00:03:13.207 LIB libspdk_keyring_file.a 00:03:13.207 LIB libspdk_keyring_linux.a 00:03:13.207 LIB libspdk_scheduler_dpdk_governor.a 00:03:13.207 LIB libspdk_scheduler_gscheduler.a 00:03:13.207 LIB libspdk_accel_error.a 00:03:13.207 LIB libspdk_accel_iaa.a 00:03:13.207 LIB libspdk_scheduler_dynamic.a 00:03:13.207 SO libspdk_keyring_file.so.1.0 00:03:13.207 SO libspdk_keyring_linux.so.1.0 00:03:13.207 LIB libspdk_accel_ioat.a 00:03:13.207 SO libspdk_scheduler_gscheduler.so.4.0 00:03:13.465 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:13.465 SO libspdk_accel_error.so.2.0 00:03:13.465 SO libspdk_scheduler_dynamic.so.4.0 00:03:13.465 SO libspdk_accel_iaa.so.3.0 00:03:13.465 SO libspdk_accel_ioat.so.6.0 00:03:13.465 LIB libspdk_accel_dsa.a 00:03:13.465 SYMLINK libspdk_keyring_linux.so 00:03:13.465 SYMLINK libspdk_scheduler_gscheduler.so 00:03:13.465 LIB libspdk_blob_bdev.a 00:03:13.465 SYMLINK libspdk_keyring_file.so 
00:03:13.465 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:13.465 SO libspdk_accel_dsa.so.5.0 00:03:13.465 SYMLINK libspdk_scheduler_dynamic.so 00:03:13.465 SYMLINK libspdk_accel_error.so 00:03:13.465 SYMLINK libspdk_accel_iaa.so 00:03:13.465 SYMLINK libspdk_accel_ioat.so 00:03:13.465 SO libspdk_blob_bdev.so.11.0 00:03:13.465 SYMLINK libspdk_accel_dsa.so 00:03:13.465 SYMLINK libspdk_blob_bdev.so 00:03:13.723 LIB libspdk_sock_posix.a 00:03:13.724 SO libspdk_sock_posix.so.6.0 00:03:13.724 SYMLINK libspdk_sock_posix.so 00:03:13.982 CC module/bdev/gpt/gpt.o 00:03:13.982 CC module/bdev/gpt/vbdev_gpt.o 00:03:13.982 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:13.982 CC module/bdev/error/vbdev_error.o 00:03:13.982 CC module/bdev/lvol/vbdev_lvol.o 00:03:13.982 CC module/bdev/error/vbdev_error_rpc.o 00:03:13.982 CC module/bdev/malloc/bdev_malloc.o 00:03:13.982 CC module/bdev/raid/bdev_raid_rpc.o 00:03:13.982 CC module/bdev/null/bdev_null_rpc.o 00:03:13.982 CC module/bdev/raid/bdev_raid.o 00:03:13.982 CC module/bdev/nvme/nvme_rpc.o 00:03:13.982 CC module/bdev/null/bdev_null.o 00:03:13.982 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:13.982 CC module/bdev/nvme/bdev_mdns_client.o 00:03:13.982 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:13.982 CC module/bdev/nvme/bdev_nvme.o 00:03:13.982 CC module/bdev/raid/raid1.o 00:03:13.982 CC module/bdev/raid/bdev_raid_sb.o 00:03:13.982 CC module/bdev/nvme/vbdev_opal.o 00:03:13.982 CC module/bdev/raid/raid0.o 00:03:13.982 CC module/bdev/split/vbdev_split.o 00:03:13.982 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:13.982 CC module/bdev/raid/concat.o 00:03:13.982 CC module/bdev/split/vbdev_split_rpc.o 00:03:13.982 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:13.982 CC module/bdev/passthru/vbdev_passthru.o 00:03:13.982 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:13.982 CC module/blobfs/bdev/blobfs_bdev.o 00:03:13.982 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:13.982 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:13.982 CC module/bdev/delay/vbdev_delay.o 00:03:13.982 CC module/bdev/iscsi/bdev_iscsi.o 00:03:13.982 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:13.982 CC module/bdev/ftl/bdev_ftl.o 00:03:13.982 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:13.982 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:13.982 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:13.982 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:13.982 CC module/bdev/aio/bdev_aio.o 00:03:13.982 CC module/bdev/aio/bdev_aio_rpc.o 00:03:13.982 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:13.982 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:14.241 LIB libspdk_blobfs_bdev.a 00:03:14.241 LIB libspdk_bdev_split.a 00:03:14.241 LIB libspdk_bdev_null.a 00:03:14.241 LIB libspdk_bdev_gpt.a 00:03:14.241 SO libspdk_blobfs_bdev.so.6.0 00:03:14.241 LIB libspdk_bdev_error.a 00:03:14.241 SO libspdk_bdev_split.so.6.0 00:03:14.241 LIB libspdk_bdev_passthru.a 00:03:14.499 SO libspdk_bdev_null.so.6.0 00:03:14.499 SO libspdk_bdev_error.so.6.0 00:03:14.499 SO libspdk_bdev_gpt.so.6.0 00:03:14.499 SYMLINK libspdk_blobfs_bdev.so 00:03:14.499 LIB libspdk_bdev_ftl.a 00:03:14.499 SO libspdk_bdev_passthru.so.6.0 00:03:14.499 LIB libspdk_bdev_malloc.a 00:03:14.499 LIB libspdk_bdev_aio.a 00:03:14.499 SYMLINK libspdk_bdev_split.so 00:03:14.499 LIB libspdk_bdev_zone_block.a 00:03:14.499 SYMLINK libspdk_bdev_null.so 00:03:14.499 LIB libspdk_bdev_iscsi.a 00:03:14.499 SYMLINK libspdk_bdev_gpt.so 00:03:14.499 SYMLINK libspdk_bdev_error.so 00:03:14.499 SO libspdk_bdev_malloc.so.6.0 00:03:14.499 SO libspdk_bdev_ftl.so.6.0 
00:03:14.499 LIB libspdk_bdev_delay.a 00:03:14.499 SYMLINK libspdk_bdev_passthru.so 00:03:14.499 SO libspdk_bdev_aio.so.6.0 00:03:14.499 SO libspdk_bdev_zone_block.so.6.0 00:03:14.499 SO libspdk_bdev_delay.so.6.0 00:03:14.499 SO libspdk_bdev_iscsi.so.6.0 00:03:14.499 SYMLINK libspdk_bdev_malloc.so 00:03:14.499 SYMLINK libspdk_bdev_ftl.so 00:03:14.499 LIB libspdk_bdev_lvol.a 00:03:14.499 SYMLINK libspdk_bdev_aio.so 00:03:14.499 SYMLINK libspdk_bdev_iscsi.so 00:03:14.499 SYMLINK libspdk_bdev_zone_block.so 00:03:14.499 SYMLINK libspdk_bdev_delay.so 00:03:14.499 LIB libspdk_bdev_virtio.a 00:03:14.499 SO libspdk_bdev_lvol.so.6.0 00:03:14.499 SO libspdk_bdev_virtio.so.6.0 00:03:14.757 SYMLINK libspdk_bdev_lvol.so 00:03:14.757 SYMLINK libspdk_bdev_virtio.so 00:03:14.757 LIB libspdk_bdev_raid.a 00:03:15.015 SO libspdk_bdev_raid.so.6.0 00:03:15.015 SYMLINK libspdk_bdev_raid.so 00:03:15.582 LIB libspdk_bdev_nvme.a 00:03:15.840 SO libspdk_bdev_nvme.so.7.0 00:03:15.840 SYMLINK libspdk_bdev_nvme.so 00:03:16.408 CC module/event/subsystems/vmd/vmd.o 00:03:16.667 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:16.667 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:16.667 CC module/event/subsystems/iobuf/iobuf.o 00:03:16.667 CC module/event/subsystems/sock/sock.o 00:03:16.667 CC module/event/subsystems/keyring/keyring.o 00:03:16.667 CC module/event/subsystems/scheduler/scheduler.o 00:03:16.667 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:16.667 LIB libspdk_event_vmd.a 00:03:16.667 LIB libspdk_event_iobuf.a 00:03:16.667 LIB libspdk_event_keyring.a 00:03:16.667 SO libspdk_event_vmd.so.6.0 00:03:16.667 LIB libspdk_event_vhost_blk.a 00:03:16.667 LIB libspdk_event_sock.a 00:03:16.667 LIB libspdk_event_scheduler.a 00:03:16.667 SO libspdk_event_iobuf.so.3.0 00:03:16.667 SO libspdk_event_vhost_blk.so.3.0 00:03:16.667 SO libspdk_event_keyring.so.1.0 00:03:16.667 SO libspdk_event_sock.so.5.0 00:03:16.667 SO libspdk_event_scheduler.so.4.0 00:03:16.667 SYMLINK libspdk_event_vmd.so 00:03:16.926 SYMLINK libspdk_event_iobuf.so 00:03:16.926 SYMLINK libspdk_event_vhost_blk.so 00:03:16.926 SYMLINK libspdk_event_keyring.so 00:03:16.926 SYMLINK libspdk_event_sock.so 00:03:16.926 SYMLINK libspdk_event_scheduler.so 00:03:17.184 CC module/event/subsystems/accel/accel.o 00:03:17.443 LIB libspdk_event_accel.a 00:03:17.443 SO libspdk_event_accel.so.6.0 00:03:17.443 SYMLINK libspdk_event_accel.so 00:03:17.702 CC module/event/subsystems/bdev/bdev.o 00:03:17.961 LIB libspdk_event_bdev.a 00:03:17.961 SO libspdk_event_bdev.so.6.0 00:03:17.961 SYMLINK libspdk_event_bdev.so 00:03:18.528 CC module/event/subsystems/scsi/scsi.o 00:03:18.528 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:18.528 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:18.528 CC module/event/subsystems/nbd/nbd.o 00:03:18.528 CC module/event/subsystems/ublk/ublk.o 00:03:18.528 LIB libspdk_event_ublk.a 00:03:18.528 LIB libspdk_event_scsi.a 00:03:18.528 LIB libspdk_event_nbd.a 00:03:18.528 SO libspdk_event_ublk.so.3.0 00:03:18.528 SO libspdk_event_scsi.so.6.0 00:03:18.528 SO libspdk_event_nbd.so.6.0 00:03:18.528 LIB libspdk_event_nvmf.a 00:03:18.787 SYMLINK libspdk_event_ublk.so 00:03:18.787 SYMLINK libspdk_event_scsi.so 00:03:18.787 SO libspdk_event_nvmf.so.6.0 00:03:18.787 SYMLINK libspdk_event_nbd.so 00:03:18.787 SYMLINK libspdk_event_nvmf.so 00:03:19.046 CC module/event/subsystems/iscsi/iscsi.o 00:03:19.046 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:19.305 LIB libspdk_event_iscsi.a 00:03:19.305 LIB libspdk_event_vhost_scsi.a 00:03:19.305 SO 
libspdk_event_iscsi.so.6.0 00:03:19.305 SO libspdk_event_vhost_scsi.so.3.0 00:03:19.305 SYMLINK libspdk_event_iscsi.so 00:03:19.305 SYMLINK libspdk_event_vhost_scsi.so 00:03:19.564 SO libspdk.so.6.0 00:03:19.564 SYMLINK libspdk.so 00:03:19.824 CC test/rpc_client/rpc_client_test.o 00:03:19.824 CC app/trace_record/trace_record.o 00:03:19.824 TEST_HEADER include/spdk/accel.h 00:03:19.824 TEST_HEADER include/spdk/accel_module.h 00:03:19.824 TEST_HEADER include/spdk/assert.h 00:03:19.824 TEST_HEADER include/spdk/bdev.h 00:03:19.824 TEST_HEADER include/spdk/barrier.h 00:03:19.824 TEST_HEADER include/spdk/base64.h 00:03:19.824 TEST_HEADER include/spdk/bit_array.h 00:03:19.824 TEST_HEADER include/spdk/bdev_module.h 00:03:19.824 TEST_HEADER include/spdk/bdev_zone.h 00:03:19.824 CXX app/trace/trace.o 00:03:19.824 TEST_HEADER include/spdk/blob_bdev.h 00:03:19.824 TEST_HEADER include/spdk/bit_pool.h 00:03:19.824 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:19.824 CC app/spdk_top/spdk_top.o 00:03:19.824 TEST_HEADER include/spdk/blob.h 00:03:19.824 TEST_HEADER include/spdk/blobfs.h 00:03:19.824 CC app/spdk_nvme_perf/perf.o 00:03:19.824 TEST_HEADER include/spdk/conf.h 00:03:19.824 TEST_HEADER include/spdk/config.h 00:03:19.824 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.824 CC app/spdk_nvme_identify/identify.o 00:03:19.824 CC app/spdk_lspci/spdk_lspci.o 00:03:19.824 TEST_HEADER include/spdk/cpuset.h 00:03:19.824 TEST_HEADER include/spdk/crc16.h 00:03:19.824 TEST_HEADER include/spdk/crc64.h 00:03:19.824 TEST_HEADER include/spdk/crc32.h 00:03:19.824 TEST_HEADER include/spdk/dif.h 00:03:19.824 TEST_HEADER include/spdk/endian.h 00:03:19.824 TEST_HEADER include/spdk/dma.h 00:03:19.824 TEST_HEADER include/spdk/env_dpdk.h 00:03:19.824 TEST_HEADER include/spdk/event.h 00:03:19.824 TEST_HEADER include/spdk/fd_group.h 00:03:19.824 TEST_HEADER include/spdk/fd.h 00:03:19.824 TEST_HEADER include/spdk/env.h 00:03:19.824 TEST_HEADER include/spdk/file.h 00:03:19.824 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:19.824 TEST_HEADER include/spdk/ftl.h 00:03:19.824 TEST_HEADER include/spdk/gpt_spec.h 00:03:19.824 TEST_HEADER include/spdk/hexlify.h 00:03:19.824 TEST_HEADER include/spdk/histogram_data.h 00:03:19.824 TEST_HEADER include/spdk/idxd_spec.h 00:03:19.824 TEST_HEADER include/spdk/idxd.h 00:03:19.824 TEST_HEADER include/spdk/ioat_spec.h 00:03:19.824 TEST_HEADER include/spdk/ioat.h 00:03:19.824 TEST_HEADER include/spdk/iscsi_spec.h 00:03:19.824 TEST_HEADER include/spdk/init.h 00:03:19.824 TEST_HEADER include/spdk/json.h 00:03:19.824 TEST_HEADER include/spdk/jsonrpc.h 00:03:19.824 TEST_HEADER include/spdk/keyring.h 00:03:19.824 TEST_HEADER include/spdk/keyring_module.h 00:03:19.824 TEST_HEADER include/spdk/likely.h 00:03:19.824 TEST_HEADER include/spdk/log.h 00:03:19.824 TEST_HEADER include/spdk/lvol.h 00:03:19.824 TEST_HEADER include/spdk/memory.h 00:03:19.824 TEST_HEADER include/spdk/nbd.h 00:03:19.824 CC app/nvmf_tgt/nvmf_main.o 00:03:19.824 TEST_HEADER include/spdk/net.h 00:03:19.824 TEST_HEADER include/spdk/mmio.h 00:03:19.824 TEST_HEADER include/spdk/nvme.h 00:03:19.824 TEST_HEADER include/spdk/notify.h 00:03:19.824 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:19.824 TEST_HEADER include/spdk/nvme_intel.h 00:03:19.824 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:19.824 TEST_HEADER include/spdk/nvme_spec.h 00:03:19.824 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:19.824 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:19.824 TEST_HEADER include/spdk/nvme_zns.h 00:03:19.824 CC app/spdk_dd/spdk_dd.o 
00:03:19.824 CC app/iscsi_tgt/iscsi_tgt.o 00:03:19.824 TEST_HEADER include/spdk/nvmf.h 00:03:19.824 TEST_HEADER include/spdk/nvmf_spec.h 00:03:19.824 TEST_HEADER include/spdk/opal.h 00:03:19.824 TEST_HEADER include/spdk/nvmf_transport.h 00:03:19.824 TEST_HEADER include/spdk/opal_spec.h 00:03:19.824 TEST_HEADER include/spdk/pci_ids.h 00:03:19.824 TEST_HEADER include/spdk/pipe.h 00:03:19.824 TEST_HEADER include/spdk/reduce.h 00:03:19.824 TEST_HEADER include/spdk/queue.h 00:03:19.824 TEST_HEADER include/spdk/rpc.h 00:03:19.824 CC app/spdk_tgt/spdk_tgt.o 00:03:19.824 TEST_HEADER include/spdk/scsi.h 00:03:19.824 TEST_HEADER include/spdk/scheduler.h 00:03:19.824 TEST_HEADER include/spdk/scsi_spec.h 00:03:19.824 TEST_HEADER include/spdk/sock.h 00:03:19.824 TEST_HEADER include/spdk/stdinc.h 00:03:19.824 TEST_HEADER include/spdk/string.h 00:03:19.824 TEST_HEADER include/spdk/trace_parser.h 00:03:19.824 TEST_HEADER include/spdk/thread.h 00:03:19.824 TEST_HEADER include/spdk/trace.h 00:03:19.824 TEST_HEADER include/spdk/tree.h 00:03:19.824 TEST_HEADER include/spdk/util.h 00:03:19.824 TEST_HEADER include/spdk/ublk.h 00:03:19.824 TEST_HEADER include/spdk/uuid.h 00:03:19.824 TEST_HEADER include/spdk/version.h 00:03:19.824 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:19.824 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:19.824 TEST_HEADER include/spdk/vmd.h 00:03:19.824 TEST_HEADER include/spdk/xor.h 00:03:19.824 TEST_HEADER include/spdk/vhost.h 00:03:19.824 CXX test/cpp_headers/accel.o 00:03:19.824 TEST_HEADER include/spdk/zipf.h 00:03:19.824 CXX test/cpp_headers/accel_module.o 00:03:19.824 CXX test/cpp_headers/barrier.o 00:03:19.824 CXX test/cpp_headers/assert.o 00:03:19.824 CXX test/cpp_headers/base64.o 00:03:19.824 CXX test/cpp_headers/bdev.o 00:03:19.824 CXX test/cpp_headers/bit_array.o 00:03:19.824 CXX test/cpp_headers/bdev_zone.o 00:03:19.825 CXX test/cpp_headers/bdev_module.o 00:03:19.825 CXX test/cpp_headers/blob_bdev.o 00:03:19.825 CXX test/cpp_headers/blobfs_bdev.o 00:03:19.825 CXX test/cpp_headers/bit_pool.o 00:03:19.825 CXX test/cpp_headers/blob.o 00:03:19.825 CXX test/cpp_headers/conf.o 00:03:19.825 CXX test/cpp_headers/config.o 00:03:19.825 CXX test/cpp_headers/blobfs.o 00:03:19.825 CXX test/cpp_headers/cpuset.o 00:03:19.825 CXX test/cpp_headers/crc32.o 00:03:19.825 CXX test/cpp_headers/dif.o 00:03:19.825 CXX test/cpp_headers/crc64.o 00:03:20.099 CXX test/cpp_headers/crc16.o 00:03:20.099 CXX test/cpp_headers/dma.o 00:03:20.099 CXX test/cpp_headers/env_dpdk.o 00:03:20.099 CXX test/cpp_headers/env.o 00:03:20.099 CXX test/cpp_headers/endian.o 00:03:20.099 CXX test/cpp_headers/event.o 00:03:20.099 CXX test/cpp_headers/fd_group.o 00:03:20.099 CXX test/cpp_headers/fd.o 00:03:20.099 CXX test/cpp_headers/file.o 00:03:20.099 CXX test/cpp_headers/gpt_spec.o 00:03:20.099 CXX test/cpp_headers/ftl.o 00:03:20.099 CXX test/cpp_headers/hexlify.o 00:03:20.099 CXX test/cpp_headers/histogram_data.o 00:03:20.099 CXX test/cpp_headers/init.o 00:03:20.099 CXX test/cpp_headers/ioat.o 00:03:20.099 CXX test/cpp_headers/idxd.o 00:03:20.099 CXX test/cpp_headers/idxd_spec.o 00:03:20.099 CXX test/cpp_headers/ioat_spec.o 00:03:20.099 CXX test/cpp_headers/json.o 00:03:20.099 CXX test/cpp_headers/iscsi_spec.o 00:03:20.099 CXX test/cpp_headers/jsonrpc.o 00:03:20.099 CXX test/cpp_headers/keyring.o 00:03:20.099 CXX test/cpp_headers/keyring_module.o 00:03:20.099 CXX test/cpp_headers/likely.o 00:03:20.099 CXX test/cpp_headers/log.o 00:03:20.099 CXX test/cpp_headers/memory.o 00:03:20.099 CXX test/cpp_headers/lvol.o 
00:03:20.099 CXX test/cpp_headers/nbd.o 00:03:20.099 CXX test/cpp_headers/mmio.o 00:03:20.099 CXX test/cpp_headers/notify.o 00:03:20.099 CXX test/cpp_headers/net.o 00:03:20.099 CXX test/cpp_headers/nvme.o 00:03:20.099 CXX test/cpp_headers/nvme_intel.o 00:03:20.099 CXX test/cpp_headers/nvme_ocssd.o 00:03:20.099 CXX test/cpp_headers/nvme_zns.o 00:03:20.099 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:20.099 CXX test/cpp_headers/nvme_spec.o 00:03:20.099 CXX test/cpp_headers/nvmf_cmd.o 00:03:20.099 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:20.099 CXX test/cpp_headers/nvmf.o 00:03:20.099 CXX test/cpp_headers/nvmf_spec.o 00:03:20.099 CC examples/util/zipf/zipf.o 00:03:20.099 CXX test/cpp_headers/nvmf_transport.o 00:03:20.099 CXX test/cpp_headers/opal_spec.o 00:03:20.099 CXX test/cpp_headers/opal.o 00:03:20.099 CXX test/cpp_headers/pci_ids.o 00:03:20.099 CXX test/cpp_headers/pipe.o 00:03:20.099 CXX test/cpp_headers/queue.o 00:03:20.099 CXX test/cpp_headers/reduce.o 00:03:20.099 CC test/thread/poller_perf/poller_perf.o 00:03:20.099 CXX test/cpp_headers/rpc.o 00:03:20.099 CXX test/cpp_headers/scheduler.o 00:03:20.099 CXX test/cpp_headers/scsi.o 00:03:20.099 CXX test/cpp_headers/scsi_spec.o 00:03:20.099 CXX test/cpp_headers/sock.o 00:03:20.099 CXX test/cpp_headers/stdinc.o 00:03:20.099 CXX test/cpp_headers/string.o 00:03:20.099 CXX test/cpp_headers/thread.o 00:03:20.099 CXX test/cpp_headers/trace.o 00:03:20.099 CC examples/ioat/perf/perf.o 00:03:20.099 CXX test/cpp_headers/trace_parser.o 00:03:20.099 CC examples/ioat/verify/verify.o 00:03:20.099 CC test/env/vtophys/vtophys.o 00:03:20.099 CXX test/cpp_headers/tree.o 00:03:20.099 CXX test/cpp_headers/ublk.o 00:03:20.099 CXX test/cpp_headers/util.o 00:03:20.099 CC test/app/jsoncat/jsoncat.o 00:03:20.099 CC test/env/pci/pci_ut.o 00:03:20.099 CC test/app/stub/stub.o 00:03:20.099 CC test/dma/test_dma/test_dma.o 00:03:20.099 CC test/app/histogram_perf/histogram_perf.o 00:03:20.099 CXX test/cpp_headers/uuid.o 00:03:20.099 CC test/env/memory/memory_ut.o 00:03:20.099 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:20.099 CXX test/cpp_headers/version.o 00:03:20.099 CC app/fio/nvme/fio_plugin.o 00:03:20.099 CXX test/cpp_headers/vfio_user_pci.o 00:03:20.099 CC app/fio/bdev/fio_plugin.o 00:03:20.099 CC test/app/bdev_svc/bdev_svc.o 00:03:20.383 CXX test/cpp_headers/vfio_user_spec.o 00:03:20.383 LINK spdk_lspci 00:03:20.383 LINK rpc_client_test 00:03:20.657 LINK interrupt_tgt 00:03:20.657 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:20.657 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:20.657 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:20.657 CC test/env/mem_callbacks/mem_callbacks.o 00:03:20.657 LINK nvmf_tgt 00:03:20.657 LINK spdk_nvme_discover 00:03:20.657 LINK iscsi_tgt 00:03:20.657 LINK spdk_tgt 00:03:20.657 LINK vtophys 00:03:20.657 LINK spdk_trace_record 00:03:20.657 LINK zipf 00:03:20.657 LINK poller_perf 00:03:20.916 LINK jsoncat 00:03:20.916 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:20.916 CXX test/cpp_headers/vhost.o 00:03:20.916 LINK env_dpdk_post_init 00:03:20.916 CXX test/cpp_headers/vmd.o 00:03:20.916 CXX test/cpp_headers/xor.o 00:03:20.916 CXX test/cpp_headers/zipf.o 00:03:20.916 LINK histogram_perf 00:03:20.916 LINK stub 00:03:20.916 LINK verify 00:03:20.916 LINK ioat_perf 00:03:20.916 LINK bdev_svc 00:03:20.916 LINK spdk_dd 00:03:20.916 LINK test_dma 00:03:20.916 LINK spdk_trace 00:03:20.916 LINK pci_ut 00:03:21.175 LINK nvme_fuzz 00:03:21.175 LINK spdk_bdev 00:03:21.175 LINK vhost_fuzz 00:03:21.175 LINK spdk_nvme 
00:03:21.175 LINK spdk_nvme_perf 00:03:21.175 LINK spdk_nvme_identify 00:03:21.175 LINK spdk_top 00:03:21.175 CC test/event/event_perf/event_perf.o 00:03:21.175 CC test/event/reactor/reactor.o 00:03:21.175 CC test/event/reactor_perf/reactor_perf.o 00:03:21.435 LINK mem_callbacks 00:03:21.435 CC test/event/app_repeat/app_repeat.o 00:03:21.435 CC test/event/scheduler/scheduler.o 00:03:21.435 CC examples/idxd/perf/perf.o 00:03:21.435 CC examples/vmd/lsvmd/lsvmd.o 00:03:21.435 CC examples/sock/hello_world/hello_sock.o 00:03:21.435 CC examples/vmd/led/led.o 00:03:21.435 CC app/vhost/vhost.o 00:03:21.435 CC examples/thread/thread/thread_ex.o 00:03:21.435 LINK reactor 00:03:21.435 LINK reactor_perf 00:03:21.435 LINK event_perf 00:03:21.435 LINK app_repeat 00:03:21.435 CC test/nvme/aer/aer.o 00:03:21.435 CC test/nvme/compliance/nvme_compliance.o 00:03:21.435 CC test/nvme/reserve/reserve.o 00:03:21.435 CC test/nvme/fused_ordering/fused_ordering.o 00:03:21.435 CC test/nvme/e2edp/nvme_dp.o 00:03:21.435 CC test/nvme/cuse/cuse.o 00:03:21.435 CC test/nvme/reset/reset.o 00:03:21.435 CC test/nvme/sgl/sgl.o 00:03:21.435 CC test/nvme/overhead/overhead.o 00:03:21.435 CC test/nvme/boot_partition/boot_partition.o 00:03:21.435 LINK lsvmd 00:03:21.435 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:21.435 CC test/nvme/fdp/fdp.o 00:03:21.435 CC test/nvme/connect_stress/connect_stress.o 00:03:21.694 CC test/nvme/simple_copy/simple_copy.o 00:03:21.694 CC test/blobfs/mkfs/mkfs.o 00:03:21.694 CC test/nvme/startup/startup.o 00:03:21.694 CC test/accel/dif/dif.o 00:03:21.694 LINK led 00:03:21.694 CC test/nvme/err_injection/err_injection.o 00:03:21.694 LINK scheduler 00:03:21.694 LINK memory_ut 00:03:21.694 LINK vhost 00:03:21.694 LINK hello_sock 00:03:21.694 LINK thread 00:03:21.694 LINK idxd_perf 00:03:21.694 CC test/lvol/esnap/esnap.o 00:03:21.694 LINK startup 00:03:21.694 LINK boot_partition 00:03:21.694 LINK connect_stress 00:03:21.694 LINK doorbell_aers 00:03:21.694 LINK mkfs 00:03:21.694 LINK err_injection 00:03:21.694 LINK fused_ordering 00:03:21.694 LINK reserve 00:03:21.694 LINK simple_copy 00:03:21.694 LINK sgl 00:03:21.694 LINK reset 00:03:21.694 LINK nvme_dp 00:03:21.953 LINK aer 00:03:21.953 LINK nvme_compliance 00:03:21.953 LINK overhead 00:03:21.953 LINK fdp 00:03:21.953 LINK dif 00:03:21.953 LINK iscsi_fuzz 00:03:22.211 CC examples/nvme/hotplug/hotplug.o 00:03:22.211 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:22.211 CC examples/nvme/hello_world/hello_world.o 00:03:22.211 CC examples/nvme/reconnect/reconnect.o 00:03:22.211 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:22.211 CC examples/nvme/arbitration/arbitration.o 00:03:22.211 CC examples/nvme/abort/abort.o 00:03:22.211 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:22.211 CC examples/accel/perf/accel_perf.o 00:03:22.211 CC examples/blob/cli/blobcli.o 00:03:22.211 CC examples/blob/hello_world/hello_blob.o 00:03:22.211 LINK pmr_persistence 00:03:22.211 LINK cmb_copy 00:03:22.471 LINK hello_world 00:03:22.471 LINK hotplug 00:03:22.471 LINK arbitration 00:03:22.471 LINK reconnect 00:03:22.471 LINK abort 00:03:22.471 CC test/bdev/bdevio/bdevio.o 00:03:22.471 LINK hello_blob 00:03:22.471 LINK nvme_manage 00:03:22.471 LINK cuse 00:03:22.730 LINK accel_perf 00:03:22.730 LINK blobcli 00:03:22.730 LINK bdevio 00:03:23.299 CC examples/bdev/bdevperf/bdevperf.o 00:03:23.299 CC examples/bdev/hello_world/hello_bdev.o 00:03:23.299 LINK hello_bdev 00:03:23.866 LINK bdevperf 00:03:24.124 CC examples/nvmf/nvmf/nvmf.o 00:03:24.383 LINK nvmf 00:03:25.320 
LINK esnap 00:03:25.579 00:03:25.579 real 0m35.159s 00:03:25.579 user 4m53.763s 00:03:25.579 sys 2m56.902s 00:03:25.579 20:24:13 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:25.579 20:24:13 make -- common/autotest_common.sh@10 -- $ set +x 00:03:25.579 ************************************ 00:03:25.579 END TEST make 00:03:25.579 ************************************ 00:03:25.579 20:24:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:25.579 20:24:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:25.579 20:24:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:25.579 20:24:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.579 20:24:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:25.579 20:24:13 -- pm/common@44 -- $ pid=783202 00:03:25.579 20:24:13 -- pm/common@50 -- $ kill -TERM 783202 00:03:25.579 20:24:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.579 20:24:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:25.579 20:24:13 -- pm/common@44 -- $ pid=783204 00:03:25.579 20:24:13 -- pm/common@50 -- $ kill -TERM 783204 00:03:25.579 20:24:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.579 20:24:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:25.579 20:24:13 -- pm/common@44 -- $ pid=783206 00:03:25.579 20:24:13 -- pm/common@50 -- $ kill -TERM 783206 00:03:25.579 20:24:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.579 20:24:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:25.580 20:24:13 -- pm/common@44 -- $ pid=783231 00:03:25.580 20:24:13 -- pm/common@50 -- $ sudo -E kill -TERM 783231 00:03:25.580 20:24:14 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:25.580 20:24:14 -- nvmf/common.sh@7 -- # uname -s 00:03:25.580 20:24:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.580 20:24:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.580 20:24:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.580 20:24:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.580 20:24:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.580 20:24:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.580 20:24:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.580 20:24:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.580 20:24:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.580 20:24:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.580 20:24:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:25.580 20:24:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:25.580 20:24:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.580 20:24:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.580 20:24:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:25.580 20:24:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:25.580 20:24:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:25.580 20:24:14 -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.580 20:24:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.580 20:24:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.580 20:24:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.580 20:24:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.580 20:24:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.580 20:24:14 -- paths/export.sh@5 -- # export PATH 00:03:25.580 20:24:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.580 20:24:14 -- nvmf/common.sh@47 -- # : 0 00:03:25.580 20:24:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:25.580 20:24:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:25.580 20:24:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:25.580 20:24:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.580 20:24:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.580 20:24:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:25.580 20:24:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:25.580 20:24:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:25.580 20:24:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.580 20:24:14 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.580 20:24:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.580 20:24:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.580 20:24:14 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:25.839 20:24:14 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.839 20:24:14 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:25.839 20:24:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.839 20:24:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.839 20:24:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.839 20:24:14 -- spdk/autotest.sh@48 -- # udevadm_pid=858090 00:03:25.839 20:24:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:25.839 20:24:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.839 20:24:14 -- pm/common@17 -- # local monitor 00:03:25.839 20:24:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.839 20:24:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 
00:03:25.839 20:24:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.839 20:24:14 -- pm/common@21 -- # date +%s 00:03:25.839 20:24:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.839 20:24:14 -- pm/common@21 -- # date +%s 00:03:25.839 20:24:14 -- pm/common@25 -- # sleep 1 00:03:25.839 20:24:14 -- pm/common@21 -- # date +%s 00:03:25.839 20:24:14 -- pm/common@21 -- # date +%s 00:03:25.839 20:24:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722018254 00:03:25.839 20:24:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722018254 00:03:25.839 20:24:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722018254 00:03:25.839 20:24:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1722018254 00:03:25.839 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722018254_collect-vmstat.pm.log 00:03:25.839 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722018254_collect-cpu-temp.pm.log 00:03:25.839 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722018254_collect-cpu-load.pm.log 00:03:25.839 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1722018254_collect-bmc-pm.bmc.pm.log 00:03:26.779 20:24:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:26.779 20:24:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:26.779 20:24:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:26.779 20:24:15 -- common/autotest_common.sh@10 -- # set +x 00:03:26.779 20:24:15 -- spdk/autotest.sh@59 -- # create_test_list 00:03:26.779 20:24:15 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:26.779 20:24:15 -- common/autotest_common.sh@10 -- # set +x 00:03:26.779 20:24:15 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:26.779 20:24:15 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:26.779 20:24:15 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:26.779 20:24:15 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:26.779 20:24:15 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:26.779 20:24:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:26.779 20:24:15 -- common/autotest_common.sh@1455 -- # uname 00:03:26.779 20:24:15 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:26.779 20:24:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:26.779 20:24:15 -- common/autotest_common.sh@1475 -- # uname 00:03:26.779 20:24:15 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:26.779 20:24:15 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:26.779 20:24:15 -- spdk/autotest.sh@71 -- # 
CC_TYPE=CC_TYPE=gcc 00:03:26.779 20:24:15 -- spdk/autotest.sh@72 -- # hash lcov 00:03:26.779 20:24:15 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:26.779 20:24:15 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:26.779 --rc lcov_branch_coverage=1 00:03:26.779 --rc lcov_function_coverage=1 00:03:26.779 --rc genhtml_branch_coverage=1 00:03:26.779 --rc genhtml_function_coverage=1 00:03:26.779 --rc genhtml_legend=1 00:03:26.779 --rc geninfo_all_blocks=1 00:03:26.779 ' 00:03:26.779 20:24:15 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:26.779 --rc lcov_branch_coverage=1 00:03:26.779 --rc lcov_function_coverage=1 00:03:26.779 --rc genhtml_branch_coverage=1 00:03:26.779 --rc genhtml_function_coverage=1 00:03:26.779 --rc genhtml_legend=1 00:03:26.779 --rc geninfo_all_blocks=1 00:03:26.779 ' 00:03:26.779 20:24:15 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:26.779 --rc lcov_branch_coverage=1 00:03:26.779 --rc lcov_function_coverage=1 00:03:26.779 --rc genhtml_branch_coverage=1 00:03:26.779 --rc genhtml_function_coverage=1 00:03:26.779 --rc genhtml_legend=1 00:03:26.779 --rc geninfo_all_blocks=1 00:03:26.779 --no-external' 00:03:26.779 20:24:15 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:26.779 --rc lcov_branch_coverage=1 00:03:26.779 --rc lcov_function_coverage=1 00:03:26.779 --rc genhtml_branch_coverage=1 00:03:26.779 --rc genhtml_function_coverage=1 00:03:26.779 --rc genhtml_legend=1 00:03:26.779 --rc geninfo_all_blocks=1 00:03:26.779 --no-external' 00:03:26.779 20:24:15 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:26.779 lcov: LCOV version 1.14 00:03:26.779 20:24:15 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:28.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:28.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:28.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:28.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:28.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:28.196 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:28.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:28.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:28.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:28.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:28.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:28.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:28.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:28.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:28.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:28.197 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:28.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:28.197 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:28.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:28.197 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:28.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:28.197 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:28.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:28.197 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:28.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:28.197 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:28.197 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:28.197 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:28.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:28.197 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:28.197 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:28.457 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:28.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:28.457 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:28.458 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:28.458 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:28.458 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:28.458 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:28.458 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no 
functions found 00:03:28.458 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:28.458 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:28.458 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:28.458 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:28.458 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:28.458 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:28.458 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:28.718 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:28.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:28.977 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:28.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:28.977 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:28.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:28.977 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:28.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:28.977 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:28.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:28.977 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:28.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:28.977 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:28.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:41.186 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:41.186 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:53.387 20:24:39 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:53.387 20:24:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:53.387 20:24:39 -- common/autotest_common.sh@10 -- # set +x 00:03:53.387 20:24:39 -- spdk/autotest.sh@91 -- # rm -f 
00:03:53.387 20:24:39 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:03:55.293 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:55.293 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:55.552 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:55.552 0000:d8:00.0 (8086 0a54): Already using the nvme driver
00:03:55.552 20:24:43 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:55.552 20:24:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:55.552 20:24:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:55.552 20:24:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:55.552 20:24:43 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:55.552 20:24:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:55.552 20:24:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:55.552 20:24:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:55.552 No valid GPT data, bailing
00:03:55.552 20:24:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:55.552 20:24:43 -- scripts/common.sh@392 -- # return 1
00:03:55.552 20:24:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:55.552 1+0 records in
00:03:55.552 1+0 records out
00:03:55.552 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00589252 s, 178 MB/s
00:03:55.552 20:24:43 -- spdk/autotest.sh@118 -- # sync
00:03:55.552 20:24:43 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:02.124 20:24:50 -- spdk/autotest.sh@124 -- # uname -s
00:04:02.124 20:24:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:04:02.124 20:24:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh
00:04:02.124 ************************************
00:04:02.124 START TEST setup.sh
00:04:02.124 ************************************
00:04:02.124 20:24:50 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh
00:04:02.124 * Looking for test storage...
00:04:02.124 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup
00:04:02.383 20:24:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh
00:04:02.383 ************************************
00:04:02.383 START TEST acl
00:04:02.383 ************************************
00:04:02.383 20:24:50 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh
00:04:02.383 * Looking for test storage...
00:04:02.383 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup
00:04:02.383 20:24:50 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:04:02.383 20:24:50 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
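The pre-cleanup pass above wipes the first megabyte of every unpartitioned nvme namespace that carries no GPT, but only after get_zoned_devs has ruled out zoned namespaces. A minimal sketch of that guard, rebuilt from the traced is_block_zoned calls; the standalone script wrapper is an assumption for illustration:

#!/usr/bin/env bash
# Sketch of the zoned-device guard traced above: a namespace counts as zoned
# when the kernel's queue/zoned attribute reads anything other than "none".
set -euo pipefail
shopt -s nullglob

is_block_zoned() {                       # usage: is_block_zoned nvme0n1
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    if is_block_zoned "$dev"; then
        zoned_devs[$dev]=1               # excluded from the GPT check + dd wipe
    fi
done
echo "found ${#zoned_devs[@]} zoned nvme block device(s)"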
00:04:06.647 20:24:55 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:04:06.647 20:24:55 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:04:06.647 20:24:55 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:04:10.914 Hugepages
00:04:10.914 node hugesize free / total
00:04:10.914 20:24:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:10.914 20:24:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:10.914 20:24:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:10.914 20:24:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:10.914
00:04:10.914 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:10.914 20:24:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:04:10.914 20:24:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:10.914 20:24:59 setup.sh.acl -- setup/acl.sh@20 -- # continue
...
00:04:11.173 20:24:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]]
00:04:11.173 20:24:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:11.173 20:24:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:04:11.173 20:24:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:04:11.173 20:24:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:11.173 20:24:59 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
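collect_setup_devs, traced above, reads the columns of `setup.sh status` and keeps only PCI functions bound to the nvme driver. A sketch of that parsing loop under the same column layout (Type BDF Vendor Device NUMA Driver ...); the PCI_BLOCKED check mirrors acl.sh, while the script framing and final printf are illustrative:

#!/usr/bin/env bash
# Sketch of the collect_setup_devs pattern: skip rows without a BDF (table
# headers, hugepage lines), skip non-nvme drivers, remember the rest.
set -euo pipefail

devs=()
declare -A drivers

while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue      # header/hugepage rows carry no BDF
    [[ $driver == nvme ]] || continue      # ioatdma etc. are not test targets
    if [[ ${PCI_BLOCKED:-} == *"$dev"* ]]; then
        continue                           # honor an explicit block list
    fi
    devs+=("$dev")
    drivers[$dev]=$driver
done < <(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status)

printf 'collected %d nvme controller(s): %s\n' "${#devs[@]}" "${devs[*]}"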
00:04:11.173 20:24:59 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:04:11.174 ************************************
00:04:11.174 START TEST denied
00:04:11.174 ************************************
00:04:11.174 20:24:59 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied
00:04:11.174 20:24:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0'
00:04:11.174 20:24:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:04:11.174 20:24:59 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0'
00:04:11.174 20:24:59 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config
00:04:16.451 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0
00:04:16.451 20:25:03 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0
00:04:16.451 20:25:03 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]]
00:04:16.451 20:25:03 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver
00:04:16.451 20:25:03 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:16.451 20:25:03 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:16.451 20:25:03 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:04:16.451 20:25:03 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:04:21.729 real 0m9.727s
00:04:21.729 user 0m3.189s
00:04:21.729 sys 0m5.928s
00:04:21.729 ************************************
00:04:21.729 END TEST denied
00:04:21.729 ************************************
00:04:21.729 20:25:09 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:21.729 ************************************
00:04:21.729 START TEST allowed
00:04:21.729 ************************************
00:04:21.729 20:25:09 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed
00:04:21.729 20:25:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0
00:04:21.729 20:25:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:04:21.729 20:25:09 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*'
00:04:21.729 20:25:09 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config
00:04:27.005 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:04:27.005 20:25:15 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:04:27.005 20:25:15 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:04:27.005 20:25:15 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:04:32.281 real 0m10.388s
00:04:32.281 user 0m2.943s
00:04:32.281 sys 0m5.794s
00:04:32.281 ************************************
00:04:32.281 END TEST allowed
00:04:32.281 ************************************
00:04:32.281 real 0m29.104s
00:04:32.281 user 0m9.373s
00:04:32.281 sys 0m17.772s
00:04:32.281 ************************************
00:04:32.281 END TEST acl
00:04:32.281 ************************************
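The denied/allowed pair exercises setup.sh's PCI_BLOCKED and PCI_ALLOWED environment variables. A condensed sketch of the same round-trip; it assumes root, this runner's controller BDF (0000:d8:00.0), and would rebind real hardware if executed:

#!/usr/bin/env bash
# Sketch of the deny/allow checks above against a single nvme controller.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
ctrl=0000:d8:00.0

# Denied: a blocked controller must be skipped and stay on its kernel driver.
PCI_BLOCKED=" $ctrl" "$SPDK/scripts/setup.sh" config |
    grep "Skipping denied controller at $ctrl"
[[ $(readlink -f "/sys/bus/pci/devices/$ctrl/driver") == */nvme ]]

# Allowed: the same controller must now move from nvme to vfio-pci.
PCI_ALLOWED=$ctrl "$SPDK/scripts/setup.sh" config |
    grep -E "$ctrl .*: nvme -> .*"

"$SPDK/scripts/setup.sh" reset   # hand the device back to the kernel driver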
00:04:32.281 20:25:19 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh
00:04:32.281 ************************************
00:04:32.281 START TEST hugepages
00:04:32.281 ************************************
00:04:32.281 20:25:19 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh
00:04:32.281 * Looking for test storage...
00:04:32.281 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup
00:04:32.281 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:32.281 20:25:20 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.281 20:25:20 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.281 20:25:20 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 34226788 kB' 'MemAvailable: 39371884 kB' 'Buffers: 4096 kB' 'Cached: 17334916 kB' 'SwapCached: 0 kB' 'Active: 13164240 kB' 'Inactive: 4709516 kB' 'Active(anon): 12685884 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538556 kB' 'Mapped: 185192 kB' 'Shmem: 12151140 kB' 'KReclaimable: 604716 kB' 'Slab: 1317524 kB' 'SReclaimable: 604716 kB' 'SUnreclaim: 712808 kB' 'KernelStack: 22624 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 14177332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220612 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
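get_meminfo, whose field-by-field scan is condensed above, resolves one /proc/meminfo (or per-node meminfo) field to its numeric value. A functionally equivalent sketch using awk in place of the traced read/continue loop; this is not the verbatim setup/common.sh implementation:

#!/usr/bin/env bash
# Sketch: look up a single meminfo field, optionally scoped to a NUMA node.
set -euo pipefail

get_meminfo() {                  # usage: get_meminfo Hugepagesize [node]
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Match the field name at the end of column 1, strip the " kB" suffix.
    awk -v key="$get" -F': *' '$1 ~ key"$" { sub(/ kB$/, "", $2); print $2 }' "$mem_f"
}

default_hugepages=$(get_meminfo Hugepagesize)   # 2048 on this runner
echo "default hugepage size: ${default_hugepages} kB"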
00:04:32.283 20:25:20 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:32.283 ************************************
00:04:32.283 START TEST default_setup
00:04:32.283 ************************************
00:04:32.283 20:25:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:04:32.284 20:25:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:32.284 20:25:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:32.284 20:25:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:32.284 20:25:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:32.284 20:25:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:32.284 20:25:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:32.284 20:25:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:32.284 20:25:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:32.284 20:25:20 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:35.574 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:35.574 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:37.486 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
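get_test_nr_hugepages converts the requested size into a page count: 2097152 kB at the 2048 kB default page size yields the nr_hugepages=1024 seen above, assigned to node 0. A simplified sketch of that arithmetic; the real helper also handles even splits across several nodes, which is omitted here:

#!/usr/bin/env bash
# Sketch: size in kB -> hugepage count, pinned to the requested NUMA nodes.
set -euo pipefail

default_hugepages=2048            # kB, from get_meminfo Hugepagesize
declare -a nodes_test=()

get_test_nr_hugepages() {         # usage: get_test_nr_hugepages <size-kB> [node...]
    local size=$1; shift
    local node_ids=("$@")
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))
    local node
    for node in "${node_ids[@]}"; do
        nodes_test[$node]=$nr_hugepages
    done
}

get_test_nr_hugepages 2097152 0   # 2 GiB worth of 2 MiB pages
echo "node 0 reserves ${nodes_test[0]} hugepages"   # -> 1024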
0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:37.486 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:37.486 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:37.486 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:37.486 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.486 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.486 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:37.486 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:37.486 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:37.486 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.486 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.486 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.486 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:37.486 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:37.486 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36366504 kB' 'MemAvailable: 41511504 kB' 'Buffers: 4096 kB' 'Cached: 17335052 kB' 'SwapCached: 0 kB' 'Active: 13175412 kB' 'Inactive: 4709516 kB' 'Active(anon): 12697056 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548944 kB' 'Mapped: 185432 kB' 'Shmem: 12151276 kB' 'KReclaimable: 604620 kB' 'Slab: 1315536 kB' 'SReclaimable: 604620 kB' 'SUnreclaim: 710916 kB' 'KernelStack: 22656 kB' 'PageTables: 9236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14190868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220628 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:37.487 20:25:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
[trace condensed: the setup/common.sh@31-32 loop keeps reading /proc/meminfo fields (Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted); none matches AnonHugePages, so each iteration takes the "continue" branch]
00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:37.487 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[trace condensed: setup/common.sh@17-28 prologue (local get=HugePages_Surp, node=, mem_f=/proc/meminfo, per-node meminfo existence check, mapfile -t mem)]
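For readability, here is a minimal bash sketch of the lookup pattern the trace above keeps repeating. It mirrors the behaviour visible in the xtrace (locals, mapfile into "mem", the Node-prefix strip, IFS=': ' reads, "continue" until the key matches, then echo the value); it is an illustrative reconstruction, not the verbatim setup/common.sh source.

#!/usr/bin/env bash
# Sketch of the traced lookup (assumption: reconstructed from the xtrace,
# not copied from SPDK's setup/common.sh).
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val _rest
    local mem_f=/proc/meminfo
    # When a NUMA node is given, prefer its per-node meminfo file.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it, as the
    # "${mem[@]#Node +([0-9]) }" step in the trace does.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _rest <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Example: on the machine above this prints 0, matching anon=0 in the log.
get_meminfo_sketch AnonHugePages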
00:04:37.488 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36369444 kB' 'MemAvailable: 41514444 kB' 'Buffers: 4096 kB' 'Cached: 17335056 kB' 'SwapCached: 0 kB' 'Active: 13175504 kB' 'Inactive: 4709516 kB' 'Active(anon): 12697148 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549228 kB' 'Mapped: 185416 kB' 'Shmem: 12151280 kB' 'KReclaimable: 604620 kB' 'Slab: 1315560 kB' 'SReclaimable: 604620 kB' 'SUnreclaim: 710940 kB' 'KernelStack: 22672 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14189152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220564 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
[trace condensed: the setup/common.sh@31-32 loop scans every field of the snapshot above against HugePages_Surp, taking "continue" on each non-matching field, until the HugePages_Surp field matches]
00:04:37.489 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:37.489 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:37.489 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:37.489 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:37.489 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[trace condensed: setup/common.sh@17-29 prologue (local get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip)]
00:04:37.489 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36369448 kB' 'MemAvailable: 41514448 kB' 'Buffers: 4096 kB' 'Cached: 17335072 kB' 'SwapCached: 0 kB' 'Active: 13175340 kB' 'Inactive: 4709516 kB' 'Active(anon): 12696984 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548948 kB' 'Mapped: 185336 kB' 'Shmem: 12151296 kB' 'KReclaimable: 604620 kB' 'Slab: 1315552 kB' 'SReclaimable: 604620 kB' 'SUnreclaim: 710932 kB' 'KernelStack: 22784 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14190660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220612 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
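Outside the harness, the same single-field lookup can be done with a standard one-liner; this is an ad-hoc alternative for manual inspection, not the script's own code.

# Print one field of /proc/meminfo, e.g. HugePages_Rsvd (prints "0" here).
awk -F': +' '$1 == "HugePages_Rsvd" { print $2; exit }' /proc/meminfo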
[trace condensed: the setup/common.sh@31-32 loop scans every field of the snapshot above against HugePages_Rsvd, taking "continue" on each non-matching field, until the HugePages_Rsvd field matches]
00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[trace condensed: setup/common.sh@17-29 prologue (local get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip)]
00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36370136 kB' 'MemAvailable: 41515136 kB' 'Buffers: 4096 kB' 'Cached: 17335096 kB' 'SwapCached: 0 kB' 'Active: 13175092 kB' 'Inactive: 4709516 kB' 'Active(anon): 12696736 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548648 kB' 'Mapped: 185336 kB' 'Shmem: 12151320 kB' 'KReclaimable: 604620 kB' 'Slab: 1315552 kB' 'SReclaimable: 604620 kB' 'SUnreclaim: 710932 kB' 'KernelStack: 22736 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14190680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220628 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.490 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.491 20:25:25 
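The condensed xtrace above is setup/common.sh's get_meminfo at work: pick mem_f (per-node meminfo when a node is given), mapfile the dump, strip the "Node N " prefix, then walk it one IFS=': ' read at a time, hitting continue on every field until the requested one matches and its value is echoed. A minimal sketch of that pattern, reconstructed from the trace (names and layout follow the trace; this is not the verbatim SPDK helper):

  #!/usr/bin/env bash
  shopt -s extglob   # the "Node N " prefix strip below uses +([0-9])

  # Print the value of one meminfo field, optionally for a single NUMA node,
  # mirroring the get_meminfo steps visible in the trace above.
  get_meminfo_sketch() {
      local get=$1 node=$2 var val _ mem
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")       # per-node lines start with "Node N "
      while IFS=': ' read -r var val _; do   # every miss is one 'continue' in the trace
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo_sketch HugePages_Total    # -> 1024 on this runner
  get_meminfo_sketch HugePages_Surp 0   # -> 0 for node0

The dump itself is consistent: 1024 pages at a Hugepagesize of 2048 kB is exactly the reported Hugetlb of 1024 * 2048 = 2097152 kB.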
00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.491 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21832828 kB' 'MemUsed: 10759256 kB' 'SwapCached: 0 kB' 'Active: 6696416 kB' 'Inactive: 569080 kB' 'Active(anon): 6419104 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7110888 kB' 'Mapped: 69996 kB' 'AnonPages: 157724 kB' 'Shmem: 6264496 kB' 'KernelStack: 11976 kB' 'PageTables: 4904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389632 kB' 'Slab: 728096 kB' 'SReclaimable: 389632 kB' 'SUnreclaim: 338464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the setup/common.sh@31-32 cycle of IFS=': ' / read -r var val _ / continue repeats identically for every node0 meminfo field ahead of HugePages_Surp]
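Here the same lookup runs against node0's own copy of meminfo (mem_f switches to /sys/devices/system/node/node0/meminfo, whose lines carry the "Node 0 " prefix that common.sh strips), so the totals are the per-node figures rather than the system-wide dump. The surplus count it returns feeds the nodes_test accounting that prints node0=1024 expecting 1024 just below. To eyeball the same per-node split outside the test, the kernel's standard sysfs layout can be read directly; a small sketch, assuming the 2048 kB page size this run uses:

  # Per-node 2 MiB hugepage counts straight from sysfs (standard kernel layout;
  # swap hugepages-2048kB for other sizes, e.g. hugepages-1048576kB for 1 GiB pages).
  for n in /sys/devices/system/node/node[0-9]*; do
      printf '%s: %s pages\n' "${n##*/}" \
          "$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
  done

On this runner that split is node0=1024, node1=0, matching the nodes_sys values recorded above.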
00:04:37.492 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.492 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.492 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:37.492 20:25:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:37.492 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.492 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.492 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.492 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.492 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:37.492 node0=1024 expecting 1024 00:04:37.492 20:25:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:37.492 00:04:37.492 real 0m5.871s 00:04:37.492 user 0m1.340s 00:04:37.492 sys 0m2.559s 00:04:37.492 20:25:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.492 20:25:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:37.492 ************************************ 00:04:37.492 END TEST default_setup 00:04:37.492 ************************************ 00:04:37.492 20:25:26 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:37.492 20:25:26 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.492 20:25:26 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.492 20:25:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:37.752 ************************************ 00:04:37.752 START TEST per_node_1G_alloc 00:04:37.752 ************************************ 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.752 
20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.752 20:25:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:41.954 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:41.954 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:41.954 20:25:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36388740 kB' 'MemAvailable: 41533732 kB' 'Buffers: 4096 kB' 'Cached: 17335212 kB' 'SwapCached: 0 kB' 'Active: 13174892 kB' 'Inactive: 4709516 kB' 'Active(anon): 12696536 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548144 kB' 'Mapped: 184204 kB' 'Shmem: 12151436 kB' 'KReclaimable: 604612 kB' 'Slab: 1316136 kB' 'SReclaimable: 604612 kB' 'SUnreclaim: 711524 kB' 'KernelStack: 22512 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14182236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220596 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.954 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.954 20:25:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [xtrace condensed: identical IFS=': ' / read -r var val _ / continue iterations over the intervening meminfo fields] 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.955 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # 
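[editor's note] The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time under set -x until the requested key matches. The function below is a minimal standalone sketch of that pattern, reconstructed from the trace; it is an illustration, not the verbatim SPDK helper.

    #!/usr/bin/env bash
    # Sketch of a get_meminfo-style lookup, reconstructed from the xtrace above.
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem line
        # Per-node queries read the node-local meminfo file when it exists.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            [[ -n $node ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <id> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # "MemTotal: 60295192 kB" splits into var=MemTotal val=60295192.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"   # kB for sized fields, a bare page count for HugePages_*
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on the box traced above

Each compare/continue pair in the log corresponds to one iteration of this loop; the echo 0 / return 0 records mark the match.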
00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36388604 kB' 'MemAvailable: 41533596 kB' 'Buffers: 4096 kB' 'Cached: 17335232 kB' 'SwapCached: 0 kB' 'Active: 13173780 kB' 'Inactive: 4709516 kB' 'Active(anon): 12695424 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547404 kB' 'Mapped: 184192 kB' 'Shmem: 12151456 kB' 'KReclaimable: 604612 kB' 'Slab: 1316204 kB' 'SReclaimable: 604612 kB' 'SUnreclaim: 711592 kB' 'KernelStack: 22496 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14182256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220564 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.956 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace compares each /proc/meminfo field against HugePages_Surp and continues on mismatch (MemTotal through HugePages_Rsvd) ...]
00:04:41.957 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.957 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
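[editor's note] For a quick standalone check outside the test harness, the same value the loop above extracts can be read with a one-liner (equivalent in effect, shown only as an illustration):

    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # prints 0 on this box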
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36388100 kB' 'MemAvailable: 41533092 kB' 'Buffers: 4096 kB' 'Cached: 17335232 kB' 'SwapCached: 0 kB' 'Active: 13174192 kB' 'Inactive: 4709516 kB' 'Active(anon): 12695836 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547816 kB' 'Mapped: 184192 kB' 'Shmem: 12151456 kB' 'KReclaimable: 604612 kB' 'Slab: 1316204 kB' 'SReclaimable: 604612 kB' 'SUnreclaim: 711592 kB' 'KernelStack: 22496 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14182276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220580 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.958 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace compares each /proc/meminfo field against HugePages_Rsvd and continues on mismatch (MemTotal through HugePages_Free) ...]
00:04:41.959 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:41.960 nr_hugepages=1024
00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:41.960 resv_hugepages=0
00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:41.960 surplus_hugepages=0
00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:41.960 anon_hugepages=0
00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
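[editor's note] The @107 and @109 arithmetic tests above assert that the hugepage counters the kernel reports are consistent with the 1024-page pool this test configured. The script's internal variable bindings are not visible in the trace, so the exact operands below are an assumption; this is a standalone sketch of the same invariant, not hugepages.sh itself.

    #!/usr/bin/env bash
    # Sketch of the pool consistency check around setup/hugepages.sh@107-@110.
    # Assumption: expected=1024 comes from the pool size configured earlier
    # in this test; field names are standard /proc/meminfo keys.
    expected=1024
    free=$(awk  '$1 == "HugePages_Free:"  {print $2}' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
    resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)

    # Free plus surplus and reserved pages should account for the whole pool,
    # and the kernel's total should equal the requested nr_hugepages.
    (( free == expected + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( total == expected ))              || echo "HugePages_Total != $expected" >&2

On the box traced above all four counters come back as 1024/0/0/1024, so both checks pass and the script goes on to fetch HugePages_Total.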
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36388100 kB' 'MemAvailable: 41533092 kB' 'Buffers: 4096 kB' 'Cached: 17335256 kB' 'SwapCached: 0 kB' 'Active: 13174520 kB' 'Inactive: 4709516 kB' 'Active(anon): 12696164 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548160 kB' 'Mapped: 184192 kB' 'Shmem: 12151480 kB' 'KReclaimable: 604612 kB' 'Slab: 1316204 kB' 'SReclaimable: 604612 kB' 'SUnreclaim: 711592 kB' 'KernelStack: 22528 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14184908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220580 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.960 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.960 20:25:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.962 20:25:30
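
The trace above shows setup/common.sh's get_meminfo answering a HugePages_Total query: it reads /proc/meminfo (or a per-node meminfo file when a node argument is given), then walks the fields one [[ $var == key ]] test at a time until the requested key matches and its value is echoed; the repetitive per-field compare/continue iterations between the dump and the match have been trimmed here, since every value is already visible in the printf dump. A rough, self-contained bash sketch of that lookup pattern follows (reconstructed from the trace, not copied from setup/common.sh, so treat names and details as approximate):

#!/usr/bin/env bash
# Approximation of the get_meminfo pattern exercised in the trace above;
# not the actual setup/common.sh source.
shopt -s extglob                      # enables the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo mem
    # With a node argument, read that node's view of meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines are prefixed with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total     # prints 1024 on the machine traced above
get_meminfo HugePages_Surp 0    # prints 0 for NUMA node 0

The backslash-heavy right-hand sides in the log, such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, are just how xtrace prints a quoted pattern that must match literally rather than as a glob.
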
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.962 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22884780 kB' 'MemUsed: 9707304 kB' 'SwapCached: 0 kB' 'Active: 6696892 kB' 'Inactive: 569080 kB' 'Active(anon): 6419580 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7110944 kB' 'Mapped: 69468 kB' 'AnonPages: 158360 kB' 'Shmem: 6264552 kB' 'KernelStack: 11688 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389632 kB' 'Slab: 728560 kB' 'SReclaimable: 389632 kB' 'SUnreclaim: 338928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.962 20:25:30
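
A small detail worth calling out from the node-0 read above: every line of /sys/devices/system/node/node0/meminfo carries a "Node 0 " prefix, which is why common.sh@29 rewrites the array with an extglob expansion before scanning. The expansion in isolation (the value here is illustrative):

shopt -s extglob
line='Node 0 HugePages_Total:   512'
echo "${line#Node +([0-9]) }"    # -> HugePages_Total:   512
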
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.963 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 13502564 kB' 'MemUsed: 14200544 kB' 'SwapCached: 0 kB' 'Active: 6477980 kB' 'Inactive: 4140436 kB' 'Active(anon): 6276936 kB' 'Inactive(anon): 0 kB' 'Active(file): 201044 kB' 'Inactive(file): 4140436 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10228436 kB' 'Mapped: 114724 kB' 'AnonPages: 390084 kB' 'Shmem: 5886956 kB' 'KernelStack: 10760 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214980 kB' 'Slab: 587644 kB' 'SReclaimable: 214980 kB' 'SUnreclaim: 372664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.963 20:25:30
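
The node-1 dump above feeds the same key lookup (again trimmed down to the matching HugePages_Surp line that follows). On the bookkeeping side, hugepages.sh@115-117 folds reserved and surplus pages into each node's expected count, and @126-128 then prints the node0=512 expecting 512 / node1=512 expecting 512 lines seen below. A minimal sketch of that accounting, assuming the get_meminfo helper sketched earlier and illustrative starting values:

#!/usr/bin/env bash
# Sketch of the per-node expectation bookkeeping (hugepages.sh@115-128);
# assumes the get_meminfo function from the earlier sketch is defined.
nodes_test=([0]=512 [1]=512)    # per-node targets, as seeded in this run
resv=0                          # globally reserved pages (HugePages_Rsvd)

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                # add reserved pages
    surp=$(get_meminfo HugePages_Surp "$node")    # per-node surplus
    (( nodes_test[node] += surp ))
done

for node in "${!nodes_test[@]}"; do
    actual=$(get_meminfo HugePages_Total "$node")
    echo "node$node=$actual expecting ${nodes_test[node]}"
done
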
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:41.965 node0=512 expecting 512 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:41.965 node1=512 expecting 512 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:41.965 00:04:41.965 real 0m4.328s 00:04:41.965 user 0m1.653s 00:04:41.965 sys 0m2.755s 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.965 20:25:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:41.965 ************************************ 00:04:41.965 END TEST per_node_1G_alloc 00:04:41.965 ************************************ 00:04:41.965 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:41.965 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.965 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.965 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:41.965 ************************************ 00:04:41.965 START TEST even_2G_alloc 00:04:41.965 ************************************ 00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:41.965 20:25:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:46.164 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:46.164 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.164 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36381832 kB' 'MemAvailable: 41526824 kB' 'Buffers: 4096 kB' 'Cached: 17335376 kB' 'SwapCached: 0 kB' 'Active: 13175896 kB' 'Inactive: 4709516 kB' 'Active(anon): 12697540 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548792 kB' 'Mapped: 184292 kB' 'Shmem: 12151600 kB' 'KReclaimable: 604612 kB' 'Slab: 1317100 kB' 'SReclaimable: 604612 kB' 'SUnreclaim: 712488 kB' 'KernelStack: 22528 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14183080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220644 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
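Each get_meminfo call snapshots the whole of /proc/meminfo and then walks it key by key until the requested field matches; those per-key comparisons are condensed in the trace below. The helper behaves roughly like this sketch, a simplification inferred from the trace rather than a verbatim copy of setup/common.sh:

    # Sketch of the lookup pattern the xtrace output is stepping through.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # With a node argument, read the node-local file instead, when present.
        # (Those files prefix each line with "Node N "; the real helper strips
        # that prefix with a pattern substitution, omitted in this sketch.)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            # Skip every key until the requested one matches, then print its value.
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo AnonHugePages  # prints 0 on this box, matching the trace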
[key-by-key /proc/meminfo scan: every field in the snapshot above is read with IFS=': ' / read -r var val _ and skipped via continue until AnonHugePages matches]
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
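With anon recorded as 0, verify_nr_hugepages repeats the same lookup for surplus and reserved pages before comparing the per-node counts. Condensed, the three probes amount to the following (illustrative, reusing the get_meminfo sketch above):

    anon=$(get_meminfo AnonHugePages)   # transparent hugepage usage: 0 in this run
    surp=$(get_meminfo HugePages_Surp)  # surplus pages: 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)  # reserved pages: 0 in this run
    # All three come back 0 here, so the expected per-node split (512 + 512
    # pages of 2048 kB) is checked against HugePages_Total: 1024 unadjusted.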
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.166 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36382476 kB' 'MemAvailable: 41527468 kB' 'Buffers: 4096 kB' 'Cached: 17335396 kB' 'SwapCached: 0 kB' 'Active: 13174716 kB' 'Inactive: 4709516 kB' 'Active(anon): 12696360 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548020 kB' 'Mapped: 184200 kB' 'Shmem: 12151620 kB' 'KReclaimable: 604612 kB' 'Slab: 1317096 kB' 'SReclaimable: 604612 kB' 'SUnreclaim: 712484 kB' 'KernelStack: 22496 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14183100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220628 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
[key-by-key /proc/meminfo scan: every field in the snapshot above is read and skipped via continue until HugePages_Surp matches]
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.168 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36382224 kB' 'MemAvailable: 41527216 kB' 'Buffers: 4096 kB' 'Cached: 17335396 kB' 'SwapCached: 0 kB' 'Active: 13175056 kB' 'Inactive: 4709516 kB' 'Active(anon): 12696700 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548436 kB' 'Mapped: 184200 kB' 'Shmem: 12151620 kB' 'KReclaimable: 604612 kB' 'Slab: 1317096 kB' 'SReclaimable: 604612 kB' 'SUnreclaim: 712484 kB' 'KernelStack: 22512 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14183120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220644 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
[key-by-key /proc/meminfo scan: every field in the snapshot above is read and skipped via continue while searching for HugePages_Rsvd]
00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.170 nr_hugepages=1024 00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.170 resv_hugepages=0 00:04:46.170 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.170 surplus_hugepages=0 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.171 anon_hugepages=0 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36381064 kB' 'MemAvailable: 41526056 kB' 'Buffers: 4096 kB' 'Cached: 17335420 kB' 'SwapCached: 0 kB' 'Active: 13176344 kB' 'Inactive: 4709516 kB' 'Active(anon): 12697988 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549760 kB' 'Mapped: 184200 kB' 'Shmem: 12151644 kB' 'KReclaimable: 604612 kB' 'Slab: 1317096 kB' 'SReclaimable: 604612 kB' 'SUnreclaim: 712484 kB' 'KernelStack: 22560 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14228872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220660 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.171 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == 
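[note: the setup/common.sh@17-@33 tags above all belong to one small helper. get_meminfo slurps a meminfo file, splits each 'Key: value' line, and prints the value of the requested key. The sketch below is reconstructed from the tagged trace lines for readability; it is an approximation, not a verbatim copy of the suite's common.sh.]
# Sketch (assumed, reconstructed from the xtrace): get_meminfo <key> [node]
get_meminfo() {
    local get=$1 node=${2:-} var val _ mem_f mem
    shopt -s extglob                        # needed for the +([0-9]) pattern below
    mem_f=/proc/meminfo                     # default source (common.sh@22)
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo    # per-node source (@23-@24)
    fi
    mapfile -t mem < "$mem_f"               # cache the file (@28)
    mem=("${mem[@]#Node +([0-9]) }")        # strip the sysfs "Node N " prefix (@29)
    while IFS=': ' read -r var val _; do    # split "Key: value kB" (@31)
        [[ $var == "$get" ]] || continue    # non-matches produce the condensed runs above (@32)
        echo "$val" && return 0             # print the bare value (@33)
    done < <(printf '%s\n' "${mem[@]}")     # replay the cached lines (@16)
    return 1
}
[note: called as 'get_meminfo HugePages_Rsvd' this prints 0 on this box; 'get_meminfo HugePages_Surp 0' reads node0's sysfs copy instead of /proc/meminfo.]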
[xtrace condensed: setup/common.sh@32 checks every /proc/meminfo key from MemTotal through Unaccepted against HugePages_Total; every non-match hits 'continue' until the key matches]
00:04:46.172 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:46.172 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:46.172 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.172 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
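[note: the hugepages.sh@100-@110 arithmetic above is the core assertion of even_2G_alloc: the 1024 pages requested must all show up in HugePages_Total, with surplus and reserved both zero. A stand-alone equivalent of that check, for illustration only; the variable names are ours, not the suite's:]
requested=1024                                               # pages the test asked for
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 in this run
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # 0
(( requested == total + surp + resv )) && echo "hugepage accounting consistent"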
00:04:46.172 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.173 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22884832 kB' 'MemUsed: 9707252 kB' 'SwapCached: 0 kB' 'Active: 6697564 kB' 'Inactive: 569080 kB' 'Active(anon): 6420252 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7110980 kB' 'Mapped: 69468 kB' 'AnonPages: 158924 kB' 'Shmem: 6264588 kB' 'KernelStack: 11720 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389632 kB' 'Slab: 729004 kB' 'SReclaimable: 389632 kB' 'SUnreclaim: 339372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
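[note: the get_nodes trace (hugepages.sh@27-@33 above) enumerates the NUMA nodes and records the per-node expectation: 1024 pages split evenly across 2 nodes is 512 each, which is where nodes_sys[...]=512 comes from. A hedged sketch of that enumeration, with the 512 hard-coded the way this run resolves it:]
shopt -s extglob nullglob                  # the node+([0-9]) glob needs extglob
nodes_sys=()                               # node index -> expected hugepages
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512          # 1024 pages / 2 nodes
done
no_nodes=${#nodes_sys[@]}                  # 2 on this machine
(( no_nodes > 0 )) || echo "no NUMA nodes found" >&2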
[xtrace condensed: setup/common.sh@32 checks node0 meminfo keys MemTotal through HugePages_Free against HugePages_Surp; every non-match hits 'continue']
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:46.174 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:46.175 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.175 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.175 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.175 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.175 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 13498412 kB' 'MemUsed: 14204696 kB' 'SwapCached: 0 kB' 'Active: 6480856 kB' 'Inactive: 4140436 kB' 'Active(anon): 6279812 kB' 'Inactive(anon): 0 kB' 'Active(file): 201044 kB' 'Inactive(file): 4140436 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10228576 kB' 'Mapped: 115236 kB' 'AnonPages: 392868 kB' 'Shmem: 5887096 kB' 'KernelStack: 10792 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214980 kB' 'Slab: 588068 kB' 'SReclaimable: 214980 kB' 'SUnreclaim: 373088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
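[note: both per-node dumps above report HugePages_Total: 512 and HugePages_Free: 512 with zero surplus, i.e. the 1024-page pool really did split 512/512 across the two nodes. An illustrative spot check, not part of the suite:]
# per-node sysfs meminfo lines look like "Node 1 HugePages_Total:   512"
for n in /sys/devices/system/node/node[0-9]*; do
    awk -v node="${n##*node}" '$3 == "HugePages_Total:" {print "node" node ": " $4}' "$n/meminfo"
done
# expected output on this box: node0: 512 / node1: 512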
[xtrace condensed: setup/common.sh@32 checks node1 meminfo keys MemTotal through HugePages_Total against HugePages_Surp; every non-match hits 'continue']
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc --
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:46.176 node0=512 expecting 512
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:46.176 node1=512 expecting 512
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:46.176
00:04:46.176 real 0m4.029s
00:04:46.176 user 0m1.416s
00:04:46.176 sys 0m2.632s
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:46.176 20:25:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:46.176 ************************************
00:04:46.176 END TEST even_2G_alloc
00:04:46.176 ************************************
00:04:46.176 20:25:34 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:46.176 20:25:34 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:46.176 20:25:34 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:46.176 20:25:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:46.176 ************************************
00:04:46.176 START TEST odd_alloc
00:04:46.176 ************************************
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
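Every get_meminfo call traced in this log expands into the long per-field loop condensed above: setup/common.sh reads the meminfo file with IFS=': ' until the requested field matches, echoes its value, and returns. A minimal stand-alone sketch of that pattern, assuming plain /proc/meminfo input (the traced script additionally buffers the file with mapfile and handles per-node copies, both omitted here):

    # Echo the value of one /proc/meminfo field, in the style of
    # setup/common.sh@17-33 (simplified sketch, not the verbatim script).
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # IFS of ':' and ' ' splits 'HugePages_Surp:   0' into var and val.
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo HugePages_Surp   # prints 0 on this build node, as traced above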
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:46.176 20:25:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:49.557 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:49.557 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
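get_test_nr_hugepages turns the 2098176 kB request (HUGEMEM=2049, i.e. 2049 x 1024 kB) into pages of 2048 kB: 2098176 / 2048 = 1024.5, which the script rounds up to nr_hugepages=1025. get_test_nr_hugepages_per_node then splits that odd count across the two NUMA nodes so the first node carries the extra page, exactly as the @81-84 lines above trace: 512 assigned to the last node first, then 513 to the first. A sketch of that split under the trace's variable names (the loop body is a reconstruction of the traced arithmetic, not the verbatim hugepages.sh):

    _nr_hugepages=1025   # 2098176 kB at 2048 kB per page, rounded up
    _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))  # 512, then 513
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))         # 513 left, then 0
        : $(( --_no_nodes ))                                        # 1, then 0
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"            # node0=513 node1=512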
00:04:49.557 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:49.557 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:49.557 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:49.557 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:49.557 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:49.557 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:49.557 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:49.557 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:49.557 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:49.557 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:49.558 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:49.558 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:49.558 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.558 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.558 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.558 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.558 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.558 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.558 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.558 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.558 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36374152 kB' 'MemAvailable: 41519144 kB' 'Buffers: 4096 kB' 'Cached: 17335556 kB' 'SwapCached: 0 kB' 'Active: 13176968 kB' 'Inactive: 4709516 kB' 'Active(anon): 12698612 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550280 kB' 'Mapped: 185116 kB' 'Shmem: 12151780 kB' 'KReclaimable: 604612 kB' 'Slab: 1316140 kB' 'SReclaimable: 604612 kB' 'SUnreclaim: 711528 kB' 'KernelStack: 22608 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14185568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220772 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
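The snapshot is internally consistent: 1025 huge pages at a Hugepagesize of 2048 kB account exactly for the reported Hugetlb total, and HugePages_Free equal to HugePages_Total shows none of them claimed yet. The check, with values copied from the printf above:

    # Values copied from the meminfo snapshot above.
    hugepages_total=1025
    hugepagesize_kb=2048
    echo $(( hugepages_total * hugepagesize_kb ))   # 2099200 -> 'Hugetlb: 2099200 kB'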
00:04:49.558 [elided: setup/common.sh@31-32 compared every snapshot field from MemTotal through HardwareCorrupted against AnonHugePages and took 'continue' on each]
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
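The mem=("${mem[@]#Node +([0-9]) }") expansion that setup/common.sh@29 runs before each scan is an extglob prefix strip: it removes the 'Node <N> ' prefix that per-node meminfo files carry, so they parse exactly like /proc/meminfo (it is a no-op on the system-wide file read here). A self-contained demonstration, with sample input invented for illustration:

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern requires extended globbing
    # Two lines in the per-node format (illustrative values, not from this run):
    mapfile -t mem <<< $'Node 0 MemTotal: 30147596 kB\nNode 0 MemFree: 18187076 kB'
    mem=("${mem[@]#Node +([0-9]) }")   # strip the 'Node <N> ' prefix from each element
    printf '%s\n' "${mem[@]}"          # MemTotal: 30147596 kB / MemFree: 18187076 kB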
00:04:49.825 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36374864 kB' 'MemAvailable: 41519856 kB' 'Buffers: 4096 kB' 'Cached: 17335560 kB' 'SwapCached: 0 kB' 'Active: 13177512 kB' 'Inactive: 4709516 kB' 'Active(anon): 12699156 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550484 kB' 'Mapped: 185616 kB' 'Shmem: 12151784 kB' 'KReclaimable: 604612 kB' 'Slab: 1316124 kB' 'SReclaimable: 604612 kB' 'SUnreclaim: 711512 kB' 'KernelStack: 22640 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14186948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:04:49.826 [elided: setup/common.sh@31-32 compared every snapshot field from MemTotal through HugePages_Rsvd against HugePages_Surp and took 'continue' on each]
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36375168 kB' 'MemAvailable: 41520160 kB' 'Buffers: 4096 kB' 'Cached: 17335576 kB' 'SwapCached: 0 kB' 'Active: 13177008 kB' 'Inactive: 4709516 kB' 'Active(anon): 12698652 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550280 kB' 'Mapped: 184232 kB' 'Shmem: 12151800 kB' 'KReclaimable: 604612 kB' 'Slab: 1316208 kB' 'SReclaimable: 604612 kB' 'SUnreclaim: 711596 kB' 'KernelStack: 22640 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14186968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220740 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
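This third read completes the set verify_nr_hugepages gathers before checking the per-node counts; each one is the same scan with a different target field. Condensed against the get_meminfo sketch given earlier (the resv assignment at hugepages.sh@100 is inferred from the local declaration at @93; all three values are 0 in this run, as logged):

    anon=$(get_meminfo AnonHugePages)    # hugepages.sh@97 -> 0
    surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99 -> 0
    resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100 -> 0 (assignment inferred)
    echo "anon=$anon surp=$surp resv=$resv"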
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36375168 kB' 'MemAvailable: 41520160 kB' 'Buffers: 4096 kB' 'Cached: 17335576 kB' 'SwapCached: 0 kB' 'Active: 13177008 kB' 'Inactive: 4709516 kB' 'Active(anon): 12698652 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550280 kB' 'Mapped: 184232 kB' 'Shmem: 12151800 kB' 'KReclaimable: 604612 kB' 'Slab: 1316208 kB' 'SReclaimable: 604612 kB' 'SUnreclaim: 711596 kB' 'KernelStack: 22640 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14186968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220740 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:49.827 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
(the same @31 IFS=': '/read and @32 compare-and-continue cycle repeats for every field from MemFree through CmaFree)
00:04:49.828 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
(the compare-and-continue cycle repeats for Unaccepted, HugePages_Total and HugePages_Free)
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:49.829 nr_hugepages=1025
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:49.829 resv_hugepages=0
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:49.829 surplus_hugepages=0
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:49.829 anon_hugepages=0
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
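(With surp=0 and resv=0 extracted, the script checks that the kernel's reported hugepage count accounts for the 1025 requested pages. A bookkeeping sketch under the values echoed in this run; variable names mirror the trace, but the standalone framing is illustrative.)

```bash
#!/usr/bin/env bash
# Bookkeeping sketch using the values echoed in this run.
nr_hugepages=1025   # requested page count
surp=0              # HugePages_Surp from get_meminfo
resv=0              # HugePages_Rsvd from get_meminfo
total=1025          # HugePages_Total from get_meminfo

# Same consistency check as the trace: the kernel's total must equal
# the requested pages plus any surplus and reserved pages.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting is consistent"
fi
```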
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36373588 kB' 'MemAvailable: 41518580 kB' 'Buffers: 4096 kB' 'Cached: 17335592 kB' 'SwapCached: 0 kB' 'Active: 13176508 kB' 'Inactive: 4709516 kB' 'Active(anon): 12698152 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549700 kB' 'Mapped: 184224 kB' 'Shmem: 12151816 kB' 'KReclaimable: 604612 kB' 'Slab: 1316208 kB' 'SReclaimable: 604612 kB' 'SUnreclaim: 711596 kB' 'KernelStack: 22688 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14186988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220788 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:49.829 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
(the same @31 IFS=': '/read and @32 compare-and-continue cycle repeats for every field from MemFree through Unaccepted)
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22894216 kB' 'MemUsed: 9697868 kB' 'SwapCached: 0 kB' 'Active: 6698704 kB' 'Inactive: 569080 kB' 'Active(anon): 6421392 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7111020 kB' 'Mapped: 69480 kB' 'AnonPages: 159992 kB' 'Shmem: 6264628 kB' 'KernelStack: 11896 kB' 'PageTables: 4868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389632 kB' 'Slab: 728208 kB' 'SReclaimable: 389632 kB' 'SUnreclaim: 338576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
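(The node0 dump above reports HugePages_Total: 512. The odd_alloc test deliberately splits the odd total of 1025 pages unevenly across the two NUMA nodes, 512 on node0 and 513 on node1, exactly as the nodes_sys assignments show. A sketch of that split; the array layout mirrors the trace, but the arithmetic framing is illustrative.)

```bash
#!/usr/bin/env bash
# Sketch of the odd split the trace shows: 1025 pages over two NUMA
# nodes, with the extra page landing on node1 (512 + 513 = 1025).
nr_hugepages=1025
declare -a nodes_sys
nodes_sys[0]=$(( nr_hugepages / 2 ))               # 512
nodes_sys[1]=$(( nr_hugepages - nodes_sys[0] ))    # 513

echo "node0=${nodes_sys[0]} node1=${nodes_sys[1]}"
(( nodes_sys[0] + nodes_sys[1] == nr_hugepages )) && echo "all pages accounted for"
```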
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.831 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
(the same @31 IFS=': '/read and @32 compare-and-continue cycle repeats for every node0 field from MemFree through HugePages_Free)
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
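(get_meminfo's optional node argument switches the data source: with a node number it reads the per-node /sys/devices/system/node/nodeN/meminfo, whose lines carry a leading "Node N " prefix that the extglob expansion at common.sh@29 strips, instead of the global /proc/meminfo. A sketch of that selection logic; the paths match the trace, but the standalone framing is illustrative.)

```bash
#!/usr/bin/env bash
shopt -s extglob  # enables the +([0-9]) pattern used below

# Sketch of get_meminfo's source selection: a node argument switches the
# reader from the global /proc/meminfo to the node-local sysfs file.
node=${1:-}                    # empty => system-wide stats
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi

mapfile -t mem < "$mem_f"
# Per-node lines read "Node 0 HugePages_Surp: 0"; strip the prefix so
# they parse the same way as /proc/meminfo lines.
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
```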
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 13477752 kB' 'MemUsed: 14225356 kB' 'SwapCached: 0 kB' 'Active: 6477748 kB' 'Inactive: 4140436 kB' 'Active(anon): 6276704 kB' 'Inactive(anon): 0 kB' 'Active(file): 201044 kB' 'Inactive(file): 4140436 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10228708 kB' 'Mapped: 114744 kB' 'AnonPages: 389568 kB' 'Shmem: 5887228 kB' 'KernelStack: 10696 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214980 kB' 'Slab: 588000 kB' 'SReclaimable: 214980 kB' 'SUnreclaim: 373020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.832 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:49.834 node0=512 expecting 513
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:49.834 node1=513 expecting 512
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:49.834 
00:04:49.834 real	0m3.718s
00:04:49.834 user	0m1.187s
00:04:49.834 sys	0m2.373s
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:49.834 20:25:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:49.834 ************************************
00:04:49.834 END TEST odd_alloc
00:04:49.834 ************************************
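odd_alloc's final check, traced right above, relies on value-indexed arrays: writing `sorted_t[count]=1` for every node collapses duplicate counts, and the keys of an indexed bash array always expand in ascending order, so the test can compare them as one string against the expected split. A small self-contained sketch of that trick (array names follow the trace; the surrounding test harness is omitted):

```bash
#!/usr/bin/env bash
# Sketch of the sorted_t bookkeeping from the trace: index an array by the
# observed per-node hugepage counts, then read the keys back sorted.
declare -a nodes_test=([0]=512 [1]=513)   # counts observed on node0/node1
declare -a sorted_t=()

for node in "${!nodes_test[@]}"; do
	sorted_t[nodes_test[node]]=1          # duplicate counts collapse here
	echo "node$node=${nodes_test[node]}"
done

# Keys of an indexed array expand in ascending order, which is what makes
# the trace's final [[ 512 513 == \5\1\2\ \5\1\3 ]] comparison work.
[[ "${!sorted_t[*]}" == "512 513" ]] && echo "odd split verified"
```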
00:04:49.834 20:25:38 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:49.834 20:25:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:49.834 20:25:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:50.094 20:25:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:50.094 ************************************
00:04:50.094 START TEST custom_alloc
00:04:50.094 ************************************
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:50.094 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
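The assembly that ends in the HUGENODE= line above is plain bash array plumbing: each entry of nodes_hp becomes a `nodes_hp[n]=count` token, the running total lands in _nr_hugepages, and because custom_alloc sets IFS to ',', joining the array with `[*]` yields exactly the comma-separated spec in the trace. A hedged sketch using this run's values (the real wiring lives in setup/hugepages.sh):

```bash
#!/usr/bin/env bash
# Sketch of the HUGENODE assembly traced above, using this run's values.
IFS=,                                    # custom_alloc sets IFS=',' up front
declare -a nodes_hp=([0]=512 [1]=1024)   # per-node targets from the trace
declare -a HUGENODE=()
_nr_hugepages=0

for node in "${!nodes_hp[@]}"; do
	HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
	(( _nr_hugepages += nodes_hp[node] ))
done

echo "HUGENODE=${HUGENODE[*]}"       # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$_nr_hugepages"   # -> 1536, matching HugePages_Total: 1536 below
```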
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:50.095 20:25:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:54.301 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:54.301 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 35311712 kB' 'MemAvailable: 40456640 kB' 'Buffers: 4096 kB' 'Cached: 17335716 kB' 'SwapCached: 0 kB' 'Active: 13177312 kB' 'Inactive: 4709516 kB' 'Active(anon): 12698956 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550336 kB' 'Mapped: 184224 kB' 'Shmem: 12151940 kB' 'KReclaimable: 604548 kB' 'Slab: 1316024 kB' 'SReclaimable: 604548 kB' 'SUnreclaim: 711476 kB' 'KernelStack: 22512 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14184876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220644 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:54.301 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
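verify_nr_hugepages first decides whether anonymous hugepages should be counted at all: the `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` test above checks that '[never]' is not the active THP mode, and only then does the AnonHugePages lookup matter (here it reads 0 kB anyway). A sketch of that gate, reusing the get_meminfo sketch from earlier; the sysfs path is the standard THP location and the wiring is illustrative:

```bash
# Sketch of the anon gate traced above. The active THP mode is the
# bracketed token, e.g. "always [madvise] never" in this run.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
anon=0
if [[ $thp != *"[never]"* ]]; then
	anon=$(get_meminfo AnonHugePages)   # 0 kB in the snapshot above
fi
echo "anon=$anon"
```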
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 35312396 kB' 'MemAvailable: 40457324 kB' 'Buffers: 4096 kB' 'Cached: 17335720 kB' 'SwapCached: 0 kB' 'Active: 13177476 kB' 'Inactive: 4709516 kB' 'Active(anon): 12699120 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550592 kB' 'Mapped: 184224 kB' 'Shmem: 12151944 kB' 'KReclaimable: 604548 kB' 'Slab: 1316076 kB' 'SReclaimable: 604548 kB' 'SUnreclaim: 711528 kB' 'KernelStack: 22528 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14185012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220628 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.303 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # IFS=': ' 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.304 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 35313108 kB' 'MemAvailable: 40458036 kB' 'Buffers: 4096 kB' 'Cached: 17335756 kB' 'SwapCached: 0 kB' 'Active: 13177088 kB' 'Inactive: 4709516 kB' 'Active(anon): 12698732 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550080 kB' 'Mapped: 184224 kB' 'Shmem: 12151980 kB' 'KReclaimable: 604548 kB' 'Slab: 1316084 kB' 'SReclaimable: 604548 kB' 'SUnreclaim: 711536 kB' 'KernelStack: 22496 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14184916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220596 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.305 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.305 
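
For reference, the per-field scan traced here boils down to a few lines of shell. The sketch below is a hypothetical stand-in for the suite's get_meminfo (the helper name getmem and its layout are illustrative, not the suite's exact code): it walks a meminfo file with IFS=': ' read -r var val _, skips every field until the requested one matches, and echoes its value.

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the "Node N " prefix strip on per-node files

    getmem() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        # A per-node query reads that node's own meminfo when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }              # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"   # same split as in the trace
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # found the field
        done < "$mem_f"
        return 1                                     # requested field not present
    }

    getmem HugePages_Rsvd      # prints 0 on this box, matching the trace below
    getmem HugePages_Surp 0    # per-node form, as in the node0 query later in the run
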
20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (/proc/meminfo fields MemTotal through HugePages_Free scanned; none matched HugePages_Rsvd, each skipped with continue)
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.307 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 35319056 kB' 'MemAvailable: 40463984 kB' 'Buffers: 4096 kB' 'Cached: 17335776 kB' 'SwapCached: 0 kB' 'Active: 13177108 kB' 'Inactive: 4709516 kB' 'Active(anon): 12698752 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550080 kB' 'Mapped: 184224 kB' 'Shmem: 12152000 kB' 'KReclaimable: 604548 kB' 'Slab: 1316068 kB' 'SReclaimable: 604548 kB' 'SUnreclaim: 711520 kB' 'KernelStack: 22496 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14184936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220596 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
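
The two arithmetic guards above are the whole consistency contract: the kernel's HugePages_Total must equal the pool size the test configured plus surplus and reserved pages. Stand-alone, with this run's numbers (values copied from the trace; the snippet itself is only an illustration):

    nr_hugepages=1536   # pool size the test configured
    surp=0              # HugePages_Surp, read back above
    resv=0              # HugePages_Rsvd, read back above
    total=1536          # HugePages_Total, read back below

    # Every allocated page must be accounted for.
    (( total == nr_hugepages + surp + resv )) && echo consistent   # prints: consistent
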
20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (/proc/meminfo fields MemTotal through Unaccepted scanned; none matched HugePages_Total, each skipped with continue)
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
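
get_nodes above enumerates the NUMA nodes once and records the per-node page split this custom_alloc run requested (512 pages on node0, 1024 on node1, 1536 total). A self-contained sketch of that enumeration, assuming the same sysfs layout (the nodes_sys name mirrors the trace; the script is illustrative, not the suite's code):

    shopt -s extglob nullglob
    declare -A nodes_sys=()
    want=(512 1024)                       # the custom split this test requested
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=${want[${node##*node}]:-0}   # index by node number
    done
    echo "no_nodes=${#nodes_sys[@]}"      # prints no_nodes=2 on this box
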
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22888624 kB' 'MemUsed: 9703460 kB' 'SwapCached: 0 kB' 'Active: 6697384 kB' 'Inactive: 569080 kB' 'Active(anon): 6420072 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7111024 kB' 'Mapped: 69468 kB' 'AnonPages: 158684 kB' 'Shmem: 6264632 kB' 'KernelStack: 11736 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389568 kB' 'Slab: 728052 kB' 'SReclaimable: 389568 kB' 'SUnreclaim: 338484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:54.309 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # (per-key scan of the node0 snapshot against \H\u\g\e\P\a\g\e\s\_\S\u\r\p elided: every field from MemTotal through HugePages_Free is skipped with `continue`)
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.310 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 12430180 kB' 'MemUsed: 15272928 kB' 'SwapCached: 0 kB' 'Active: 6480108 kB' 'Inactive: 4140436 kB' 'Active(anon): 6279064 kB' 'Inactive(anon): 0 kB' 'Active(file): 201044 kB' 'Inactive(file): 4140436 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10228872 kB' 'Mapped: 114756 kB' 'AnonPages: 391784 kB' 'Shmem: 5887392 kB' 'KernelStack: 10776 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214980 kB' 'Slab: 588016 kB' 'SReclaimable: 214980 kB' 'SUnreclaim: 373036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:54.311 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # (per-key scan of the node1 snapshot against \H\u\g\e\P\a\g\e\s\_\S\u\r\p elided: every field from MemTotal through HugePages_Free is skipped with `continue`)
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:54.312 node0=512 expecting 512
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:54.312 node1=1024 expecting 1024
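
The 'node0=512 expecting 512' / 'node1=1024 expecting 1024' lines compare the per-node allocation the test requested (nodes_test) with what the kernel actually exposes (nodes_sys). As a sketch, the kernel side of that comparison can be read straight from sysfs; the paths are standard kernel interfaces, but the loop below is an illustrative reconstruction, not SPDK's get_nodes verbatim:

#!/usr/bin/env bash
# Read back the per-NUMA-node 2 MiB hugepage pools the way the node0=512 /
# node1=1024 comparison implies (illustrative reconstruction, not SPDK code).
declare -A nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
	# ${node##*node} reduces the path to the bare node index (0, 1, ...).
	nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
done
for n in "${!nodes_sys[@]}"; do
	echo "node$n=${nodes_sys[$n]}"   # prints node0=512 and node1=1024 here
done
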
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:54.312
00:04:54.312 real	0m3.964s
00:04:54.312 user	0m1.453s
00:04:54.312 sys	0m2.546s
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:54.312 20:25:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:54.312 ************************************
00:04:54.312 END TEST custom_alloc
00:04:54.312 ************************************
00:04:54.312 20:25:42 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:54.312 20:25:42 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:54.312 20:25:42 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:54.312 20:25:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:54.312 ************************************
00:04:54.312 START TEST no_shrink_alloc
00:04:54.312 ************************************
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:54.312 20:25:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
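
get_test_nr_hugepages reduces a size budget to a page count: 2097152 divided by the 2048 kB default hugepage size gives the nr_hugepages=1024 seen above, all assigned to node 0. A worked sketch of that arithmetic, assuming the size argument is in kB (the later 'Hugetlb: 2097152 kB' line is consistent with that reading); the variable names are illustrative:

#!/usr/bin/env bash
# Worked sketch of the page-count arithmetic traced above. Assumes the size
# argument is in kB, which matches 2097152 / 2048 = 1024 and the later
# "Hugetlb: 2097152 kB" line; names are assumptions, not SPDK's code.
size_kb=2097152
default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this host
nr_hugepages=$((size_kb / default_kb))
echo "nr_hugepages=$nr_hugepages on node 0"                    # -> 1024
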
00:04:58.513 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:58.513 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:58.513 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:58.513 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:58.513 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:58.513 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:58.513 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:58.513 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:58.513 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:58.513 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
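
The @96 test above gates on transparent hugepages: the bracketed token in /sys/kernel/mm/transparent_hugepage/enabled marks the active mode ('always [madvise] never' here, i.e. madvise), and AnonHugePages is only worth sampling when that mode is not 'never'. A minimal sketch of the same gate, with illustrative variable names:

#!/usr/bin/env bash
# Sketch of the transparent-hugepage gate traced above; the kernel brackets
# the active mode inside this sysfs file.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
	# THP can produce anonymous hugepages, so the counter is worth sampling.
	grep '^AnonHugePages:' /proc/meminfo
fi
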
'MemAvailable: 41455268 kB' 'Buffers: 4096 kB' 'Cached: 17335892 kB' 'SwapCached: 0 kB' 'Active: 13179896 kB' 'Inactive: 4709516 kB' 'Active(anon): 12701540 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552644 kB' 'Mapped: 185372 kB' 'Shmem: 12152116 kB' 'KReclaimable: 604548 kB' 'Slab: 1316620 kB' 'SReclaimable: 604548 kB' 'SUnreclaim: 712072 kB' 'KernelStack: 22624 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14220232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220804 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 
20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 
20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.514 
20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.514 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 
20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- 
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.515 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36310340 kB' 'MemAvailable: 41455268 kB' 'Buffers: 4096 kB' 'Cached: 17335912 kB' 'SwapCached: 0 kB' 'Active: 13179432 kB' 'Inactive: 4709516 kB' 'Active(anon): 12701076 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552228 kB' 'Mapped: 185364 kB' 'Shmem: 12152136 kB' 'KReclaimable: 604548 kB' 'Slab: 1316688 kB' 'SReclaimable: 604548 kB' 'SUnreclaim: 712140 kB' 'KernelStack: 22608 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14220248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220820 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
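The records above show what setup/common.sh's get_meminfo is doing: it walks every "Key: value" pair of the snapshot, skips each key that does not match the requested one, and echoes the value of the match. A minimal sketch of that scan, reconstructed from the @31-@33 records (not the verbatim helper; the real one also replays the snapshot through an array and handles a NUMA node argument, shown further below):

get_meminfo() {
    # Scan "Key: value kB" pairs and print the value of the requested key.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-match traces "continue"
        echo "$val"                        # e.g. "0" for HugePages_Surp here
        return 0
    done < /proc/meminfo
}

get_meminfo HugePages_Surp                 # prints 0 on the node traced above

The escaped pattern in the trace ([[ AnonHugePages == \A\n\o\n... ]]) is just xtrace's rendering of a literal, non-glob comparison, which is what the quoted "$get" reproduces.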
[repetitive xtrace records elided: the same @31-@32 scan compares every snapshot key from MemTotal through HugePages_Rsvd against HugePages_Surp; all non-matches hit "continue"]
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.517 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:58.518 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36310964 kB' 'MemAvailable: 41455892 kB' 'Buffers: 4096 kB' 'Cached: 17335916 kB' 'SwapCached: 0 kB' 'Active: 13179304 kB' 'Inactive: 4709516 kB' 'Active(anon): 12700948 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552092 kB' 'Mapped: 185364 kB' 'Shmem: 12152140 kB' 'KReclaimable: 604548 kB' 'Slab: 1316688 kB' 'SReclaimable: 604548 kB' 'SUnreclaim: 712140 kB' 'KernelStack: 22608 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14220272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220820 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
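The @20-@29 records repeat the same setup before each scan: pick a snapshot source, slurp it with mapfile, and strip any "Node <n> " prefixes. Because no node argument is given, the probe at @23 tests the nonexistent path /sys/devices/system/node/node/meminfo and the generic /proc/meminfo is used. A sketch of that selection under the same assumptions (extglob enabled, which the +([0-9]) pattern requires; the real helper's exact ordering of the checks may differ):

node=${1:-}                                 # empty in every call traced here
mem_f=/proc/meminfo
# With a node argument the per-node file would be preferred instead.
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
shopt -s extglob                            # enables the +([0-9]) glob below
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")            # "Node 0 MemTotal: ..." -> "MemTotal: ..."
printf '%s\n' "${mem[@]}"

The prefix strip is what lets the same key-scan loop work unchanged on both the global and the per-node snapshot formats.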
[repetitive xtrace records elided: the @31-@32 scan again walks MemTotal through HugePages_Free, this time against HugePages_Rsvd; all non-matches hit "continue"]
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:58.520 nr_hugepages=1024
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:58.520 resv_hugepages=0
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:58.520 surplus_hugepages=0
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:58.520 anon_hugepages=0
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:58.520 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36311220 kB' 'MemAvailable: 41456148 kB' 'Buffers: 4096 kB' 'Cached: 17335936 kB' 'SwapCached: 0 kB' 'Active: 13179480 kB' 'Inactive: 4709516 kB' 'Active(anon): 12701124 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552256 kB' 'Mapped: 185304 kB' 'Shmem: 12152160 kB' 'KReclaimable: 604548 kB' 'Slab: 1316688 kB' 'SReclaimable: 604548 kB' 'SUnreclaim: 712140 kB' 'KernelStack: 22608 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14220292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220820 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
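With anon, surp and resv all read back as 0, hugepages.sh@102-@110 prints the summary echoed above and asserts that the 1024 requested pages are all plain preallocated ones. Restated as a standalone sketch (the want variable and the get_meminfo helper from the earlier sketch are assumptions, and the traced script reads HugePages_Total after the first assertion rather than before):

want=1024                                   # pages requested by the test
anon=$(get_meminfo AnonHugePages)           # 0 kB in the snapshots above
surp=$(get_meminfo HugePages_Surp)          # 0
resv=$(get_meminfo HugePages_Rsvd)          # 0
nr=$(get_meminfo HugePages_Total)           # 1024
echo "nr_hugepages=$nr"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
(( want == nr + surp + resv ))              # 1024 == 1024 + 0 + 0
(( want == nr ))                            # no surplus or reserved slack

Each (( )) test exits nonzero when false, so under errexit a miscount would abort the suite here. The snapshot is also self-consistent: HugePages_Total 1024 times Hugepagesize 2048 kB matches Hugetlb: 2097152 kB, and HugePages_Free: 1024 shows none of the pool is in use yet.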
[repetitive xtrace records elided: the @31-@32 scan restarts at MemTotal, now matching against HugePages_Total; the per-key "continue" records resume below]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.521 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
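The @31/@32 churn above is a single helper, get_meminfo in setup/common.sh, walking the memory stats file line by line until it hits the requested field. A minimal sketch of that loop, reconstructed from the trace itself (illustrative rather than the verbatim SPDK source; the per-node branch shows up later in this log):

shopt -s extglob   # the +([0-9]) pattern below needs extended globs

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo mem
    # Per-node queries switch to the node's own meminfo file (common.sh@23-24).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # drop any "Node N " prefix (common.sh@29)
    while IFS=': ' read -r var val _; do   # common.sh@31
        [[ $var == "$get" ]] || continue   # the long run of "continue" above
        echo "$val"                        # common.sh@33: value in kB, or a bare page count
        return 0
    done < <(printf '%s\n' "${mem[@]}")    # common.sh@16
    return 1
}

get_meminfo HugePages_Total   # prints 1024 on this rig, as the trace shows next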
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
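What just happened: the pool read back from /proc/meminfo (1024 pages) satisfied the @110 invariant nr_hugepages + surp + resv, and get_nodes then sized up the per-node split, finding two NUMA nodes with all 1024 pages on node0. A hedged sketch of that step; the sysfs counter path is the standard 2048 kB one and is an assumption here, since the trace only shows the resulting assignments:

# Global invariant (hugepages.sh@110): the kernel-reported pool must equal
# the requested pages plus surplus plus reserved.
nr_hugepages=1024 surp=0 resv=0
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

# get_nodes (hugepages.sh@27-33): record each node's share of the pool.
# Assumed source: the per-node nr_hugepages counter in sysfs.
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do   # needs extglob, set above
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}   # 2 here: nodes_sys[0]=1024, nodes_sys[1]=0
(( no_nodes > 0 ))          # bail out if no NUMA topology was found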
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21832360 kB' 'MemUsed: 10759724 kB' 'SwapCached: 0 kB' 'Active: 6697180 kB' 'Inactive: 569080 kB' 'Active(anon): 6419868 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7111028 kB' 'Mapped: 70452 kB' 'AnonPages: 158372 kB' 'Shmem: 6264636 kB' 'KernelStack: 11736 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389568 kB' 'Slab: 728600 kB' 'SReclaimable: 389568 kB' 'SUnreclaim: 339032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
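The @16 dump above is node0's meminfo after the "Node 0 " prefix strip at @29; unlike the global lookup earlier, @23-24 switched mem_f to the node's sysfs file. The raw file looks like this (format illustration only; the three values are taken from the dump above):

$ head -3 /sys/devices/system/node/node0/meminfo
Node 0 MemTotal:       32592084 kB
Node 0 MemFree:        21832360 kB
Node 0 MemUsed:        10759724 kB

After mem=("${mem[@]#Node +([0-9]) }") each entry reads like a /proc/meminfo line again, so the same IFS=': ' field scan serves both the global and the per-node case; note the per-node file also carries fields /proc/meminfo lacks, such as MemUsed.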
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:58.522 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical setup/common.sh@31-32 read/compare/continue iterations for the remaining node0 fields: MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free -- until the requested field comes up ...]
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:58.524 node0=1024 expecting 1024
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:58.524 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:02.724 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:02.724 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:02.724 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
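This is the moment the test is actually about: setup.sh runs again with NRHUGE=512 and CLEAR_HUGE=no, and instead of shrinking the existing 1024-page pool it only rebinds the PCI devices to vfio-pci and prints the INFO line. A minimal sketch of that no-shrink guard, inferred from the INFO message rather than copied from scripts/setup.sh (the variable names and the exact branch shape are assumptions; writing the sysfs counter requires root):

# Inferred shape of the guard behind the INFO line: when enough pages are
# already allocated, never write a smaller value back into the pool.
NRHUGE=512
sysfs_node0=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
allocated=$(< "$sysfs_node0")
if (( allocated >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node0"
else
    echo "$NRHUGE" > "$sysfs_node0"   # only ever grows the pool
fi

verify_nr_hugepages (@204) then starts over to prove the pool still holds all 1024 pages.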
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:02.724 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36317856 kB' 'MemAvailable: 41462784 kB' 'Buffers: 4096 kB' 'Cached: 17336032 kB' 'SwapCached: 0 kB' 'Active: 13182584 kB' 'Inactive: 4709516 kB' 'Active(anon): 12704228 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554820 kB' 'Mapped: 185916 kB' 'Shmem: 12152256 kB' 'KReclaimable: 604548 kB' 'Slab: 1317508 kB' 'SReclaimable: 604548 kB' 'SUnreclaim: 712960 kB' 'KernelStack: 22736 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14225160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220868 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
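Two things start this pass: the @96 gate checks the transparent-hugepage mode string ("always [madvise] never" on this kernel, i.e. madvise is selected, not [never]) and only then is AnonHugePages looked up at all; and the @16 dump this time is the whole of /proc/meminfo, with HugePages_Total/Free both 1024 and Hugepagesize 2048 kB. A hedged sketch of the gate, using the standard THP sysfs knob (variable names here are illustrative):

# THP gate (hugepages.sh@96): skip the AnonHugePages query outright when
# transparent hugepages are disabled, i.e. the mode string shows "[never]".
thp_mode=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
anon=0
if [[ $thp_mode != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # comes back 0 kB in the scan below
fi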
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 read/compare/continue iterations for the remaining fields: Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted -- until AnonHugePages matches ...]
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36318116 kB' 'MemAvailable: 41463044 kB' 'Buffers: 4096 kB' 'Cached: 17336036 kB' 'SwapCached: 0 kB' 'Active: 13185416 kB' 'Inactive: 4709516 kB' 'Active(anon): 12707060 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557664 kB' 'Mapped: 185916 kB' 'Shmem: 12152260 kB' 'KReclaimable: 604548 kB' 'Slab: 1317364 kB' 'SReclaimable: 604548 kB' 'SUnreclaim: 712816 kB' 'KernelStack: 22768 kB' 'PageTables: 9336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14228740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220804 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
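With anon=0 banked, verify_nr_hugepages moves on to the surplus counter; the dump that opens this lookup shows the pool unchanged after the NRHUGE=512 rerun (HugePages_Total and HugePages_Free both 1024, HugePages_Rsvd and HugePages_Surp both 0). How those counters combine, as a compact sketch that mirrors the @110/@128/@130 checks traced earlier (illustrative; assumes the get_meminfo sketch above):

# Counters verify_nr_hugepages assembles via get_meminfo, with the values
# visible in the dump above:
nr=$(get_meminfo HugePages_Total)    # 1024 - pool size
free=$(get_meminfo HugePages_Free)   # 1024 - allocated but unused
surp=$(get_meminfo HugePages_Surp)   # 0    - overcommit beyond the pool
resv=$(get_meminfo HugePages_Rsvd)   # 0    - committed, not yet faulted in

# The invariant and the per-node expectation the test re-echoes:
(( nr == 1024 + surp + resv )) && echo "node0=$nr expecting 1024"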
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.725 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 read/compare/continue iterations for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages and Mapped ...]
00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32
-- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.727 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36314972 kB' 'MemAvailable: 41459836 kB' 'Buffers: 4096 kB' 'Cached: 17336056 kB' 'SwapCached: 0 kB' 'Active: 13181560 kB' 'Inactive: 4709516 kB' 'Active(anon): 12703204 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
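The xtrace above is setup/common.sh's get_meminfo helper at work: it snapshots /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a NUMA node is passed; here node is empty, so the sysfs test at common.sh@23 fails), strips any "Node N " prefix, then walks the snapshot with an IFS=': ' read loop until the requested field name matches and echoes its value. A minimal bash sketch of that flow, reconstructed from the trace rather than copied from the SPDK source (the redirections and the final return 1 are assumptions):

get_meminfo() {
    local get=$1 node=${2:-}            # field name, optional NUMA node
    local var val _
    local mem_f=/proc/meminfo mem
    # use the per-node snapshot when a node is given and sysfs exposes it
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    # per-node lines are prefixed "Node 0 MemTotal: ..." -- strip that
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Usage matching the trace: surp=$(get_meminfo HugePages_Surp) scans past every earlier meminfo field (each [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue pair above) and returns "0" on this machine.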
00:05:02.726 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-31: same setup as above with get=HugePages_Rsvd — locals, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ' read loop]
00:05:02.727 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36314972 kB' 'MemAvailable: 41459836 kB' 'Buffers: 4096 kB' 'Cached: 17336056 kB' 'SwapCached: 0 kB' 'Active: 13181560 kB' 'Inactive: 4709516 kB' 'Active(anon): 12703204 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554324 kB' 'Mapped: 186280 kB' 'Shmem: 12152280 kB' 'KReclaimable: 604484 kB' 'Slab: 1317292 kB' 'SReclaimable: 604484 kB' 'SUnreclaim: 712808 kB' 'KernelStack: 22800 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14225064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220820 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
[setup/common.sh@31-32: every snapshot field from MemTotal through HugePages_Free compared against HugePages_Rsvd and skipped with continue]
00:05:02.728 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:02.728 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:02.728 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:02.728 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:02.728 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:05:02.728 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:02.728 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:02.728 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:02.728 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:02.728 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
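The hugepages.sh@107-109 checks are the test's accounting assertion: the requested hugepage count (1024) must equal the total the kernel reports once surplus and reserved pages are folded in, and since anon, surp and resv all read back 0 here, the arithmetic reduces to 1024 == 1024 + 0 + 0. A short sketch of the same check (values taken from the trace above; the standalone (( )) lines rely on the script running under set -e, which is an assumption about the harness):

# values echoed by the trace above
nr_hugepages=1024
anon=0 surp=0 resv=0
# hugepages.sh@107: requested == current + surplus + reserved
(( 1024 == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0 -> true
# hugepages.sh@109: the pool was not shrunk or grown behind the test's back
(( 1024 == nr_hugepages ))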
00:05:02.728 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[setup/common.sh@17-31: same setup as above with get=HugePages_Total — locals, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ' read loop]
00:05:02.728 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36308336 kB' 'MemAvailable: 41453200 kB' 'Buffers: 4096 kB' 'Cached: 17336076 kB' 'SwapCached: 0 kB' 'Active: 13184908 kB' 'Inactive: 4709516 kB' 'Active(anon): 12706552 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557668 kB' 'Mapped: 185940 kB' 'Shmem: 12152300 kB' 'KReclaimable: 604484 kB' 'Slab: 1317292 kB' 'SReclaimable: 604484 kB' 'SUnreclaim: 712808 kB' 'KernelStack: 22768 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14229048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220788 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
[setup/common.sh@31-32: snapshot fields from MemTotal through Unaccepted compared against HugePages_Total and skipped with continue; the scan resumes below]
00:05:02.729 20:25:50
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21837240 kB' 'MemUsed: 10754844 kB' 'SwapCached: 0 kB' 'Active: 6697460 kB' 'Inactive: 569080 kB' 'Active(anon): 6420148 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7111028 kB' 'Mapped: 70616 kB' 'AnonPages: 158616 kB' 'Shmem: 6264636 kB' 'KernelStack: 11928 kB' 'PageTables: 4948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389440 kB' 'Slab: 728712 kB' 'SReclaimable: 
389440 kB' 'SUnreclaim: 339272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 
20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 
20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:02.729 node0=1024 expecting 1024 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:02.729 00:05:02.729 real 0m8.257s 00:05:02.729 user 0m3.007s 00:05:02.729 sys 0m5.327s 00:05:02.729 20:25:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.730 20:25:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:02.730 ************************************ 00:05:02.730 END TEST no_shrink_alloc 00:05:02.730 ************************************ 00:05:02.730 20:25:50 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:02.730 20:25:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:02.730 20:25:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:02.730 20:25:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.730 20:25:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.730 20:25:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.730 20:25:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.730 20:25:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:02.730 20:25:50 setup.sh.hugepages -- 
setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.730 20:25:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.730 20:25:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.730 20:25:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.730 20:25:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:02.730 20:25:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:02.730 00:05:02.730 real 0m30.839s 00:05:02.730 user 0m10.321s 00:05:02.730 sys 0m18.646s 00:05:02.730 20:25:50 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.730 20:25:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.730 ************************************ 00:05:02.730 END TEST hugepages 00:05:02.730 ************************************ 00:05:02.730 20:25:50 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:05:02.730 20:25:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.730 20:25:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.730 20:25:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:02.730 ************************************ 00:05:02.730 START TEST driver 00:05:02.730 ************************************ 00:05:02.730 20:25:50 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:05:02.730 * Looking for test storage... 00:05:02.730 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:02.730 20:25:50 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:02.730 20:25:50 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.730 20:25:50 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:08.004 20:25:56 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:08.004 20:25:56 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.004 20:25:56 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.004 20:25:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:08.004 ************************************ 00:05:08.004 START TEST guess_driver 00:05:08.004 ************************************ 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- 
setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:08.004 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:08.004 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:08.004 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:08.004 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:08.004 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:08.004 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:08.004 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:08.004 Looking for driver=vfio-pci 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.004 20:25:56 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 
20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 
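[Editor's note] The guess_driver trace earlier (setup/driver.sh@21 through @49) settles on vfio-pci because /sys/kernel/iommu_groups is populated (256 groups) and modprobe --show-depends vfio_pci resolves to real .ko modules. The following is a condensed sketch of that decision; the fallback to uio_pci_generic is an assumption for illustration, as the actual fallback order lives in setup/driver.sh and is not shown in this log.

pick_driver() {
  shopt -s nullglob
  # an IOMMU is usable when the kernel has populated iommu_groups
  local groups=(/sys/kernel/iommu_groups/*)
  if (( ${#groups[@]} > 0 )) &&
     modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
    echo vfio-pci
  else
    echo uio_pci_generic   # assumption: typical non-IOMMU fallback
  fi
}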
00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.199 20:26:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:14.108 20:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:14.108 20:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:14.108 20:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:14.108 20:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:14.108 20:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:14.108 20:26:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.108 20:26:02 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:19.384 00:05:19.384 real 0m11.354s 00:05:19.384 user 0m2.903s 00:05:19.384 sys 0m5.766s 00:05:19.384 20:26:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.384 20:26:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:19.384 ************************************ 00:05:19.384 END TEST guess_driver 00:05:19.384 ************************************ 00:05:19.384 00:05:19.384 real 0m16.932s 00:05:19.384 user 0m4.554s 00:05:19.384 sys 0m8.925s 00:05:19.384 20:26:07 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.384 20:26:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:19.384 ************************************ 00:05:19.384 END TEST driver 00:05:19.384 ************************************ 00:05:19.384 20:26:07 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:19.384 20:26:07 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.384 20:26:07 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.384 20:26:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:19.384 ************************************ 00:05:19.384 START TEST devices 00:05:19.384 ************************************ 00:05:19.384 20:26:07 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:19.644 * Looking for test storage... 
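[Editor's note] Every START TEST / END TEST banner pair in this log, including the driver suite just closed and the devices suite starting here, comes from the run_test wrapper in common/autotest_common.sh. Below is a rough sketch of such a banner-and-timing harness under a simplified signature; the real wrapper also validates the argument count and toggles xtrace, which is what the '[' 2 -le 1 ']' and xtrace_disable entries in this log reflect.

run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"            # produces the real/user/sys lines seen above
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}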
00:05:19.644 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:19.644 20:26:07 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:19.644 20:26:07 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:19.644 20:26:07 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:19.644 20:26:07 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:23.839 20:26:11 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:23.839 20:26:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:23.839 20:26:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:23.839 20:26:11 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:23.839 20:26:11 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:23.839 20:26:11 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:23.839 20:26:11 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:23.839 20:26:11 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:23.839 20:26:11 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:23.839 20:26:11 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:23.839 20:26:11 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:23.839 20:26:11 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:23.839 20:26:11 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:23.839 20:26:11 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:23.839 20:26:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:23.839 20:26:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:23.839 20:26:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:23.839 20:26:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:05:23.839 20:26:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:05:23.839 20:26:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:23.839 20:26:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:23.839 20:26:11 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:23.839 No valid GPT data, bailing 00:05:23.839 20:26:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:23.839 20:26:12 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:23.839 20:26:12 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:23.839 20:26:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:23.839 20:26:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:23.839 20:26:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:23.839 20:26:12 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:05:23.839 20:26:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:05:23.839 20:26:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:23.839 20:26:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 
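[Editor's note] The devices trace above qualifies nvme0n1 as the test disk in three steps: the namespace is not zoned (queue/zoned reads "none"), scripts/spdk-gpt.py finds no GPT in use ("No valid GPT data, bailing"), and its 2000398934016-byte capacity clears min_disk_size=3221225472. A sketch of the zoned and size checks only, leaving the GPT probe to the repo's spdk-gpt.py:

min_disk_size=3221225472   # 3 GiB, as in the trace above
for blk in /sys/block/nvme*n*; do
  dev=${blk##*/}
  zoned=$(cat "$blk/queue/zoned" 2>/dev/null || echo none)
  [[ $zoned == none ]] || continue            # skip zoned namespaces
  size=$(( $(cat "$blk/size") * 512 ))        # sysfs size counts 512-byte sectors
  (( size >= min_disk_size )) && { echo "test disk: $dev"; break; }
done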
00:05:23.839 20:26:12 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:23.839 20:26:12 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:23.839 20:26:12 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:23.839 20:26:12 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.839 20:26:12 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.839 20:26:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:23.839 ************************************ 00:05:23.839 START TEST nvme_mount 00:05:23.839 ************************************ 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:23.839 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:24.778 Creating new GPT entries in memory. 00:05:24.778 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:24.778 other utilities. 00:05:24.778 20:26:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:24.778 20:26:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.778 20:26:13 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:24.778 20:26:13 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:24.778 20:26:13 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:25.715 Creating new GPT entries in memory. 00:05:25.715 The operation has completed successfully. 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 897589 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.715 20:26:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
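[Editor's note] The nvme_mount trace around this point zaps the disk, creates a single partition with sgdisk --new=1:2048:2099199 (2097152 sectors of 512 bytes, exactly 1 GiB), formats it with mkfs.ext4 -qF, and mounts it. A linear sketch of that sequence, assuming /tmp/nvme_mount as a stand-in for the repo's test/setup/nvme_mount directory and partprobe in place of the sync_dev_uevents.sh helper the traced script flocks around:

disk=/dev/nvme0n1
mnt=/tmp/nvme_mount                    # assumption: stand-in mount point
sgdisk "$disk" --zap-all               # destroy any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199    # one 1 GiB partition starting at sector 2048
partprobe "$disk"                      # assumption: real test waits on udev events instead
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"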
00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.001 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.260 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.260 20:26:17 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:29.260 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:29.260 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.260 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:29.260 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:29.260 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.260 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:29.260 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:29.260 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:29.260 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.260 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.519 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:29.519 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:29.519 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:29.519 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:29.519 20:26:17 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:29.778 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:29.778 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:29.778 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:29.778 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:29.778 20:26:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.778 20:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 
20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:33.971 20:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- 
setup/devices.sh@53 -- # local found=0 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.971 20:26:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:37.261 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:37.261 00:05:37.261 real 0m13.706s 00:05:37.261 user 0m3.832s 00:05:37.261 sys 0m7.625s 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.261 20:26:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:37.261 ************************************ 00:05:37.261 END TEST nvme_mount 00:05:37.261 ************************************ 00:05:37.261 20:26:25 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:37.261 20:26:25 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.261 20:26:25 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.261 20:26:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:37.520 ************************************ 00:05:37.520 START TEST dm_mount 00:05:37.520 ************************************ 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # 
pv1=nvme0n1p2 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:37.520 20:26:25 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:38.457 Creating new GPT entries in memory. 00:05:38.457 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:38.457 other utilities. 00:05:38.457 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:38.458 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:38.458 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:38.458 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:38.458 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:39.395 Creating new GPT entries in memory. 00:05:39.395 The operation has completed successfully. 00:05:39.395 20:26:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:39.395 20:26:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:39.395 20:26:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:39.395 20:26:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:39.395 20:26:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:40.813 The operation has completed successfully. 
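The sgdisk calls above come from the drive-partitioning helper in setup/common.sh: it zaps any existing GPT/MBR, then lays out part_no equal partitions back to back starting at sector 2048. A minimal bash sketch reconstructed from this xtrace (the wrapper function and its argument handling are illustrative, not the exact SPDK source):

    # Carve $part_no equal partitions out of $disk, as seen in the trace above.
    partition_drive() {
        local disk=$1 part_no=${2:-2} size=1073741824     # 1 GiB per partition
        local part part_start=0 part_end=0
        (( size /= 512 ))                                 # bytes -> 512-byte sectors
        sgdisk "/dev/$disk" --zap-all                     # destroy old GPT/MBR data
        for (( part = 1; part <= part_no; part++ )); do
            (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
            (( part_end = part_start + size - 1 ))
            # flock serializes sgdisk runs against other users of the device
            flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
        done
    }

With size = 1073741824 / 512 = 2097152 sectors this reproduces exactly the two calls logged here: --new=1:2048:2099199 and --new=2:2099200:4196351.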
00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 902661 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.813 20:26:28 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:40.813 20:26:29 
setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.813 20:26:29 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:44.106 20:26:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.106 
20:26:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:47.396 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.396 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.396 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.396 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.396 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 
== \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:47.397 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:47.397 00:05:47.397 real 0m10.028s 00:05:47.397 user 0m2.188s 00:05:47.397 sys 0m4.614s 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.397 20:26:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:47.397 ************************************ 00:05:47.397 END TEST dm_mount 00:05:47.397 ************************************ 00:05:47.397 20:26:35 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:47.397 20:26:35 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:47.397 20:26:35 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:47.397 20:26:35 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:47.397 20:26:35 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:47.397 20:26:35 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:47.397 20:26:35 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:47.657 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:47.657 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:47.657 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:47.657 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:47.657 20:26:36 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:47.657 20:26:36 setup.sh.devices -- setup/devices.sh@33 -- # 
mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:47.657 20:26:36 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:47.657 20:26:36 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:47.657 20:26:36 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:47.657 20:26:36 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:47.657 20:26:36 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:47.657 00:05:47.657 real 0m28.348s 00:05:47.657 user 0m7.442s 00:05:47.657 sys 0m15.172s 00:05:47.657 20:26:36 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.657 20:26:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:47.657 ************************************ 00:05:47.657 END TEST devices 00:05:47.657 ************************************ 00:05:47.915 00:05:47.915 real 1m45.673s 00:05:47.916 user 0m31.839s 00:05:47.916 sys 1m0.852s 00:05:47.916 20:26:36 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.916 20:26:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:47.916 ************************************ 00:05:47.916 END TEST setup.sh 00:05:47.916 ************************************ 00:05:47.916 20:26:36 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:52.105 Hugepages 00:05:52.105 node hugesize free / total 00:05:52.105 node0 1048576kB 0 / 0 00:05:52.105 node0 2048kB 2048 / 2048 00:05:52.105 node1 1048576kB 0 / 0 00:05:52.105 node1 2048kB 0 / 0 00:05:52.105 00:05:52.105 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:52.105 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:52.105 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:52.105 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:52.105 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:52.105 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:52.105 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:52.105 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:52.105 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:52.105 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:52.105 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:52.105 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:52.105 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:52.105 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:52.105 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:52.105 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:52.105 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:52.105 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:52.105 20:26:40 -- spdk/autotest.sh@130 -- # uname -s 00:05:52.105 20:26:40 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:52.105 20:26:40 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:52.105 20:26:40 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:56.298 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:80:04.7 (8086 2021): ioatdma -> 
vfio-pci 00:05:56.298 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:56.298 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:57.679 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:57.938 20:26:46 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:58.875 20:26:47 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:58.876 20:26:47 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:58.876 20:26:47 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:58.876 20:26:47 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:58.876 20:26:47 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:58.876 20:26:47 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:58.876 20:26:47 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:58.876 20:26:47 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:58.876 20:26:47 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:58.876 20:26:47 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:58.876 20:26:47 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:58.876 20:26:47 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:06:03.064 Waiting for block devices as requested 00:06:03.064 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:03.064 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:03.064 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:03.064 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:03.064 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:03.064 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:03.064 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:03.064 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:03.064 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:03.064 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:03.064 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:03.323 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:03.323 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:03.323 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:03.582 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:03.582 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:03.582 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:06:03.841 20:26:52 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:03.841 20:26:52 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:06:03.841 20:26:52 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:03.841 20:26:52 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:06:03.841 20:26:52 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:06:03.841 20:26:52 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:06:03.841 20:26:52 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:06:03.841 20:26:52 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:03.841 
20:26:52 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:03.841 20:26:52 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:03.841 20:26:52 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:03.841 20:26:52 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:03.841 20:26:52 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:03.841 20:26:52 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:06:03.841 20:26:52 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:03.841 20:26:52 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:03.841 20:26:52 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:03.841 20:26:52 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:03.841 20:26:52 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:03.841 20:26:52 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:03.841 20:26:52 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:03.841 20:26:52 -- common/autotest_common.sh@1557 -- # continue 00:06:03.841 20:26:52 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:03.841 20:26:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:03.841 20:26:52 -- common/autotest_common.sh@10 -- # set +x 00:06:03.841 20:26:52 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:03.841 20:26:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.841 20:26:52 -- common/autotest_common.sh@10 -- # set +x 00:06:03.841 20:26:52 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:06:08.065 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:08.065 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:09.971 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:09.971 20:26:58 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:09.971 20:26:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:09.971 20:26:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.971 20:26:58 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:09.971 20:26:58 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:09.971 20:26:58 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:09.971 20:26:58 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:09.971 20:26:58 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:09.971 20:26:58 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:09.971 20:26:58 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:09.971 20:26:58 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:09.971 20:26:58 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
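The last entry above is the test's NVMe discovery step: gen_nvme.sh prints an SPDK bdev configuration for every NVMe controller on the box, and jq extracts each PCIe address (traddr). A sketch of that helper as it appears in the trace (the early-return error handling is illustrative):

    # Enumerate NVMe PCI addresses (BDFs) via SPDK's bdev config generator.
    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1    # no NVMe controllers found
        printf '%s\n' "${bdfs[@]}"
    }

On this node the array holds a single entry, which is why the (( 1 == 0 )) guard fails and only 0000:d8:00.0 is printed.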
00:06:09.971 20:26:58 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:09.971 20:26:58 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:09.971 20:26:58 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:09.971 20:26:58 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:06:09.971 20:26:58 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:09.971 20:26:58 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:06:09.971 20:26:58 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:09.971 20:26:58 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:09.971 20:26:58 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:09.971 20:26:58 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:06:09.971 20:26:58 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:06:09.971 20:26:58 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=913850 00:06:09.971 20:26:58 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.971 20:26:58 -- common/autotest_common.sh@1598 -- # waitforlisten 913850 00:06:09.971 20:26:58 -- common/autotest_common.sh@831 -- # '[' -z 913850 ']' 00:06:09.971 20:26:58 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.971 20:26:58 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.971 20:26:58 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.971 20:26:58 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.971 20:26:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.971 [2024-07-26 20:26:58.497042] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
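At this point spdk_tgt has been launched in the background (pid 913850 in this run) and waitforlisten polls until the target answers on its JSON-RPC socket, /var/tmp/spdk.sock, retrying up to max_retries=100 times. A sketch of that handshake; the polling body is an approximation of the real helper in common/autotest_common.sh, not a copy of it:

    # Wait until a freshly started SPDK app is alive and serving RPCs.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < 100; i++ )); do                 # max_retries=100 above
            kill -0 "$pid" 2>/dev/null || return 1        # target died before listening
            # rpc_get_methods only answers once initialization has finished
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }

    "$rootdir/build/bin/spdk_tgt" &
    waitforlisten_sketch $!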
00:06:09.971 [2024-07-26 20:26:58.497094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid913850 ]
00:06:10.229 EAL: No free 2048 kB hugepages reported on node 1
00:06:10.229 [2024-07-26 20:26:58.581930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.229 [2024-07-26 20:26:58.622120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.795 20:26:59 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:10.795 20:26:59 -- common/autotest_common.sh@864 -- # return 0
00:06:10.795 20:26:59 -- common/autotest_common.sh@1600 -- # bdf_id=0
00:06:10.795 20:26:59 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}"
00:06:10.795 20:26:59 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
00:06:14.074 nvme0n1
00:06:14.074 20:27:02 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:06:14.074 [2024-07-26 20:27:02.451796] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:06:14.074 request:
00:06:14.074 {
00:06:14.074 "nvme_ctrlr_name": "nvme0",
00:06:14.074 "password": "test",
00:06:14.074 "method": "bdev_nvme_opal_revert",
00:06:14.074 "req_id": 1
00:06:14.074 }
00:06:14.074 Got JSON-RPC error response
00:06:14.074 response:
00:06:14.074 {
00:06:14.074 "code": -32602,
00:06:14.074 "message": "Invalid parameters"
00:06:14.074 }
00:06:14.074 20:27:02 -- common/autotest_common.sh@1604 -- # true
00:06:14.074 20:27:02 -- common/autotest_common.sh@1605 -- # (( ++bdf_id ))
00:06:14.074 20:27:02 -- common/autotest_common.sh@1608 -- # killprocess 913850
00:06:14.074 20:27:02 -- common/autotest_common.sh@950 -- # '[' -z 913850 ']'
00:06:14.074 20:27:02 -- common/autotest_common.sh@954 -- # kill -0 913850
00:06:14.074 20:27:02 -- common/autotest_common.sh@955 -- # uname
00:06:14.074 20:27:02 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:14.074 20:27:02 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 913850
00:06:14.074 20:27:02 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:14.074 20:27:02 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:14.074 20:27:02 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 913850'
00:06:14.074 killing process with pid 913850
00:06:14.074 20:27:02 -- common/autotest_common.sh@969 -- # kill 913850
00:06:14.074 20:27:02 -- common/autotest_common.sh@974 -- # wait 913850
00:06:16.601 20:27:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:06:16.601 20:27:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:06:16.601 20:27:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:06:16.601 20:27:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:06:16.601 20:27:05 -- spdk/autotest.sh@162 -- # timing_enter lib
00:06:16.601 20:27:05 -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:16.601 20:27:05 -- common/autotest_common.sh@10 -- # set +x
00:06:16.601 20:27:05 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:06:16.601 20:27:05 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh
00:06:16.601 20:27:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.601
20:27:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.601 20:27:05 -- common/autotest_common.sh@10 -- # set +x 00:06:16.601 ************************************ 00:06:16.601 START TEST env 00:06:16.601 ************************************ 00:06:16.601 20:27:05 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:16.860 * Looking for test storage... 00:06:16.860 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:06:16.860 20:27:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:16.860 20:27:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.860 20:27:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.860 20:27:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.860 ************************************ 00:06:16.860 START TEST env_memory 00:06:16.860 ************************************ 00:06:16.860 20:27:05 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:16.860 00:06:16.860 00:06:16.860 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.860 http://cunit.sourceforge.net/ 00:06:16.860 00:06:16.860 00:06:16.860 Suite: memory 00:06:16.860 Test: alloc and free memory map ...[2024-07-26 20:27:05.287035] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:16.860 passed 00:06:16.860 Test: mem map translation ...[2024-07-26 20:27:05.306078] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:16.860 [2024-07-26 20:27:05.306096] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:16.860 [2024-07-26 20:27:05.306133] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:16.860 [2024-07-26 20:27:05.306142] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:16.860 passed 00:06:16.860 Test: mem map registration ...[2024-07-26 20:27:05.343051] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:16.860 [2024-07-26 20:27:05.343069] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:16.860 passed 00:06:16.860 Test: mem map adjacent registrations ...passed 00:06:16.860 00:06:16.860 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.860 suites 1 1 n/a 0 0 00:06:16.860 tests 4 4 4 0 0 00:06:16.860 asserts 152 152 152 0 n/a 00:06:16.860 00:06:16.860 Elapsed time = 0.136 seconds 00:06:16.860 00:06:16.860 real 0m0.149s 00:06:16.860 user 0m0.137s 00:06:16.860 sys 0m0.011s 00:06:16.860 20:27:05 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.860 20:27:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:16.860 ************************************ 
00:06:16.860 END TEST env_memory 00:06:16.860 ************************************ 00:06:17.119 20:27:05 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:17.119 20:27:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.119 20:27:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.119 20:27:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.119 ************************************ 00:06:17.119 START TEST env_vtophys 00:06:17.119 ************************************ 00:06:17.119 20:27:05 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:17.119 EAL: lib.eal log level changed from notice to debug 00:06:17.119 EAL: Detected lcore 0 as core 0 on socket 0 00:06:17.119 EAL: Detected lcore 1 as core 1 on socket 0 00:06:17.119 EAL: Detected lcore 2 as core 2 on socket 0 00:06:17.119 EAL: Detected lcore 3 as core 3 on socket 0 00:06:17.119 EAL: Detected lcore 4 as core 4 on socket 0 00:06:17.119 EAL: Detected lcore 5 as core 5 on socket 0 00:06:17.119 EAL: Detected lcore 6 as core 6 on socket 0 00:06:17.119 EAL: Detected lcore 7 as core 8 on socket 0 00:06:17.119 EAL: Detected lcore 8 as core 9 on socket 0 00:06:17.119 EAL: Detected lcore 9 as core 10 on socket 0 00:06:17.119 EAL: Detected lcore 10 as core 11 on socket 0 00:06:17.119 EAL: Detected lcore 11 as core 12 on socket 0 00:06:17.119 EAL: Detected lcore 12 as core 13 on socket 0 00:06:17.119 EAL: Detected lcore 13 as core 14 on socket 0 00:06:17.119 EAL: Detected lcore 14 as core 16 on socket 0 00:06:17.119 EAL: Detected lcore 15 as core 17 on socket 0 00:06:17.119 EAL: Detected lcore 16 as core 18 on socket 0 00:06:17.119 EAL: Detected lcore 17 as core 19 on socket 0 00:06:17.119 EAL: Detected lcore 18 as core 20 on socket 0 00:06:17.119 EAL: Detected lcore 19 as core 21 on socket 0 00:06:17.119 EAL: Detected lcore 20 as core 22 on socket 0 00:06:17.119 EAL: Detected lcore 21 as core 24 on socket 0 00:06:17.119 EAL: Detected lcore 22 as core 25 on socket 0 00:06:17.119 EAL: Detected lcore 23 as core 26 on socket 0 00:06:17.119 EAL: Detected lcore 24 as core 27 on socket 0 00:06:17.119 EAL: Detected lcore 25 as core 28 on socket 0 00:06:17.119 EAL: Detected lcore 26 as core 29 on socket 0 00:06:17.119 EAL: Detected lcore 27 as core 30 on socket 0 00:06:17.119 EAL: Detected lcore 28 as core 0 on socket 1 00:06:17.119 EAL: Detected lcore 29 as core 1 on socket 1 00:06:17.119 EAL: Detected lcore 30 as core 2 on socket 1 00:06:17.119 EAL: Detected lcore 31 as core 3 on socket 1 00:06:17.119 EAL: Detected lcore 32 as core 4 on socket 1 00:06:17.119 EAL: Detected lcore 33 as core 5 on socket 1 00:06:17.119 EAL: Detected lcore 34 as core 6 on socket 1 00:06:17.119 EAL: Detected lcore 35 as core 8 on socket 1 00:06:17.119 EAL: Detected lcore 36 as core 9 on socket 1 00:06:17.119 EAL: Detected lcore 37 as core 10 on socket 1 00:06:17.119 EAL: Detected lcore 38 as core 11 on socket 1 00:06:17.119 EAL: Detected lcore 39 as core 12 on socket 1 00:06:17.119 EAL: Detected lcore 40 as core 13 on socket 1 00:06:17.119 EAL: Detected lcore 41 as core 14 on socket 1 00:06:17.119 EAL: Detected lcore 42 as core 16 on socket 1 00:06:17.119 EAL: Detected lcore 43 as core 17 on socket 1 00:06:17.119 EAL: Detected lcore 44 as core 18 on socket 1 00:06:17.119 EAL: Detected lcore 45 as core 19 on socket 1 00:06:17.119 EAL: Detected lcore 46 as core 20 on socket 1 
00:06:17.119 EAL: Detected lcore 47 as core 21 on socket 1 00:06:17.119 EAL: Detected lcore 48 as core 22 on socket 1 00:06:17.119 EAL: Detected lcore 49 as core 24 on socket 1 00:06:17.119 EAL: Detected lcore 50 as core 25 on socket 1 00:06:17.119 EAL: Detected lcore 51 as core 26 on socket 1 00:06:17.119 EAL: Detected lcore 52 as core 27 on socket 1 00:06:17.119 EAL: Detected lcore 53 as core 28 on socket 1 00:06:17.119 EAL: Detected lcore 54 as core 29 on socket 1 00:06:17.119 EAL: Detected lcore 55 as core 30 on socket 1 00:06:17.119 EAL: Detected lcore 56 as core 0 on socket 0 00:06:17.119 EAL: Detected lcore 57 as core 1 on socket 0 00:06:17.119 EAL: Detected lcore 58 as core 2 on socket 0 00:06:17.119 EAL: Detected lcore 59 as core 3 on socket 0 00:06:17.119 EAL: Detected lcore 60 as core 4 on socket 0 00:06:17.119 EAL: Detected lcore 61 as core 5 on socket 0 00:06:17.119 EAL: Detected lcore 62 as core 6 on socket 0 00:06:17.119 EAL: Detected lcore 63 as core 8 on socket 0 00:06:17.119 EAL: Detected lcore 64 as core 9 on socket 0 00:06:17.119 EAL: Detected lcore 65 as core 10 on socket 0 00:06:17.119 EAL: Detected lcore 66 as core 11 on socket 0 00:06:17.119 EAL: Detected lcore 67 as core 12 on socket 0 00:06:17.119 EAL: Detected lcore 68 as core 13 on socket 0 00:06:17.119 EAL: Detected lcore 69 as core 14 on socket 0 00:06:17.119 EAL: Detected lcore 70 as core 16 on socket 0 00:06:17.119 EAL: Detected lcore 71 as core 17 on socket 0 00:06:17.119 EAL: Detected lcore 72 as core 18 on socket 0 00:06:17.119 EAL: Detected lcore 73 as core 19 on socket 0 00:06:17.119 EAL: Detected lcore 74 as core 20 on socket 0 00:06:17.119 EAL: Detected lcore 75 as core 21 on socket 0 00:06:17.119 EAL: Detected lcore 76 as core 22 on socket 0 00:06:17.119 EAL: Detected lcore 77 as core 24 on socket 0 00:06:17.119 EAL: Detected lcore 78 as core 25 on socket 0 00:06:17.119 EAL: Detected lcore 79 as core 26 on socket 0 00:06:17.119 EAL: Detected lcore 80 as core 27 on socket 0 00:06:17.119 EAL: Detected lcore 81 as core 28 on socket 0 00:06:17.119 EAL: Detected lcore 82 as core 29 on socket 0 00:06:17.119 EAL: Detected lcore 83 as core 30 on socket 0 00:06:17.119 EAL: Detected lcore 84 as core 0 on socket 1 00:06:17.119 EAL: Detected lcore 85 as core 1 on socket 1 00:06:17.119 EAL: Detected lcore 86 as core 2 on socket 1 00:06:17.119 EAL: Detected lcore 87 as core 3 on socket 1 00:06:17.119 EAL: Detected lcore 88 as core 4 on socket 1 00:06:17.119 EAL: Detected lcore 89 as core 5 on socket 1 00:06:17.119 EAL: Detected lcore 90 as core 6 on socket 1 00:06:17.119 EAL: Detected lcore 91 as core 8 on socket 1 00:06:17.119 EAL: Detected lcore 92 as core 9 on socket 1 00:06:17.119 EAL: Detected lcore 93 as core 10 on socket 1 00:06:17.119 EAL: Detected lcore 94 as core 11 on socket 1 00:06:17.119 EAL: Detected lcore 95 as core 12 on socket 1 00:06:17.119 EAL: Detected lcore 96 as core 13 on socket 1 00:06:17.119 EAL: Detected lcore 97 as core 14 on socket 1 00:06:17.119 EAL: Detected lcore 98 as core 16 on socket 1 00:06:17.119 EAL: Detected lcore 99 as core 17 on socket 1 00:06:17.119 EAL: Detected lcore 100 as core 18 on socket 1 00:06:17.119 EAL: Detected lcore 101 as core 19 on socket 1 00:06:17.119 EAL: Detected lcore 102 as core 20 on socket 1 00:06:17.119 EAL: Detected lcore 103 as core 21 on socket 1 00:06:17.119 EAL: Detected lcore 104 as core 22 on socket 1 00:06:17.119 EAL: Detected lcore 105 as core 24 on socket 1 00:06:17.119 EAL: Detected lcore 106 as core 25 on socket 1 00:06:17.119 EAL: 
Detected lcore 107 as core 26 on socket 1 00:06:17.119 EAL: Detected lcore 108 as core 27 on socket 1 00:06:17.119 EAL: Detected lcore 109 as core 28 on socket 1 00:06:17.119 EAL: Detected lcore 110 as core 29 on socket 1 00:06:17.119 EAL: Detected lcore 111 as core 30 on socket 1 00:06:17.119 EAL: Maximum logical cores by configuration: 128 00:06:17.119 EAL: Detected CPU lcores: 112 00:06:17.119 EAL: Detected NUMA nodes: 2 00:06:17.119 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:17.119 EAL: Detected shared linkage of DPDK 00:06:17.119 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:17.119 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:17.119 EAL: Registered [vdev] bus. 00:06:17.120 EAL: bus.vdev log level changed from disabled to notice 00:06:17.120 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:17.120 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:17.120 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:17.120 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:17.120 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:17.120 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:17.120 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:17.120 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:17.120 EAL: No shared files mode enabled, IPC will be disabled 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: Bus pci wants IOVA as 'DC' 00:06:17.120 EAL: Bus vdev wants IOVA as 'DC' 00:06:17.120 EAL: Buses did not request a specific IOVA mode. 00:06:17.120 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:17.120 EAL: Selected IOVA mode 'VA' 00:06:17.120 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.120 EAL: Probing VFIO support... 00:06:17.120 EAL: IOMMU type 1 (Type 1) is supported 00:06:17.120 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:17.120 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:17.120 EAL: VFIO support initialized 00:06:17.120 EAL: Ask a virtual area of 0x2e000 bytes 00:06:17.120 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:17.120 EAL: Setting up physically contiguous memory... 
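
The key line above is "IOMMU is available, selecting IOVA as VA mode": under vfio type 1 with IOVA as VA, the bus address of a buffer is simply its virtual address, which is exactly the translation the vtophys suite below asserts on. A minimal sketch of the application side of this bring-up, assuming the public spdk/env.h API (the app name is made up and error handling is trimmed):

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "vtophys_sketch";   /* hypothetical app name */
        opts.core_mask = "0x1";
        if (spdk_env_init(&opts) < 0) { /* produces EAL output like the above */
            return 1;
        }

        /* DMA-safe memory carved from the hugepage heap; allocations like
         * this drive the "Heap on socket 0 was expanded by NMB" mem event
         * callbacks printed further below. */
        void *buf = spdk_dma_zmalloc(0x1000, 0x1000, NULL);
        uint64_t iova = spdk_vtophys(buf, NULL);

        if (buf == NULL || iova == SPDK_VTOPHYS_ERROR) {
            fprintf(stderr, "no translation for %p\n", buf);
        } else {
            /* under IOVA as VA this equals (uint64_t)buf */
            printf("vaddr %p -> iova 0x%" PRIx64 "\n", buf, iova);
        }
        spdk_dma_free(buf);
        return 0;
    }
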
00:06:17.120 EAL: Setting maximum number of open files to 524288 00:06:17.120 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:17.120 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:17.120 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:17.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.120 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:17.120 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.120 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:17.120 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:17.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.120 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:17.120 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.120 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:17.120 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:17.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.120 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:17.120 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.120 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:17.120 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:17.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.120 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:17.120 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.120 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:17.120 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:17.120 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:17.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.120 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:17.120 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:17.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.120 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:17.120 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:17.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.120 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:17.120 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:17.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.120 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:17.120 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:17.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.120 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:17.120 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:17.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.120 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:17.120 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:17.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.120 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:17.120 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:17.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.120 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:17.120 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:17.120 EAL: Hugepages will be freed exactly as allocated. 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: TSC frequency is ~2500000 KHz 00:06:17.120 EAL: Main lcore 0 is ready (tid=7fbaead0fa00;cpuset=[0]) 00:06:17.120 EAL: Trying to obtain current memory policy. 00:06:17.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.120 EAL: Restoring previous memory policy: 0 00:06:17.120 EAL: request: mp_malloc_sync 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: Heap on socket 0 was expanded by 2MB 00:06:17.120 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:06:17.120 EAL: probe driver: 8086:37d2 net_i40e 00:06:17.120 EAL: Not managed by a supported kernel driver, skipped 00:06:17.120 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:06:17.120 EAL: probe driver: 8086:37d2 net_i40e 00:06:17.120 EAL: Not managed by a supported kernel driver, skipped 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:17.120 EAL: Mem event callback 'spdk:(nil)' registered 00:06:17.120 00:06:17.120 00:06:17.120 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.120 http://cunit.sourceforge.net/ 00:06:17.120 00:06:17.120 00:06:17.120 Suite: components_suite 00:06:17.120 Test: vtophys_malloc_test ...passed 00:06:17.120 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:17.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.120 EAL: Restoring previous memory policy: 4 00:06:17.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.120 EAL: request: mp_malloc_sync 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: Heap on socket 0 was expanded by 4MB 00:06:17.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.120 EAL: request: mp_malloc_sync 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: Heap on socket 0 was shrunk by 4MB 00:06:17.120 EAL: Trying to obtain current memory policy. 00:06:17.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.120 EAL: Restoring previous memory policy: 4 00:06:17.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.120 EAL: request: mp_malloc_sync 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: Heap on socket 0 was expanded by 6MB 00:06:17.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.120 EAL: request: mp_malloc_sync 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: Heap on socket 0 was shrunk by 6MB 00:06:17.120 EAL: Trying to obtain current memory policy. 00:06:17.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.120 EAL: Restoring previous memory policy: 4 00:06:17.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.120 EAL: request: mp_malloc_sync 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: Heap on socket 0 was expanded by 10MB 00:06:17.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.120 EAL: request: mp_malloc_sync 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: Heap on socket 0 was shrunk by 10MB 00:06:17.120 EAL: Trying to obtain current memory policy. 
00:06:17.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.120 EAL: Restoring previous memory policy: 4 00:06:17.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.120 EAL: request: mp_malloc_sync 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: Heap on socket 0 was expanded by 18MB 00:06:17.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.120 EAL: request: mp_malloc_sync 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: Heap on socket 0 was shrunk by 18MB 00:06:17.120 EAL: Trying to obtain current memory policy. 00:06:17.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.120 EAL: Restoring previous memory policy: 4 00:06:17.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.120 EAL: request: mp_malloc_sync 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: Heap on socket 0 was expanded by 34MB 00:06:17.120 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.120 EAL: request: mp_malloc_sync 00:06:17.120 EAL: No shared files mode enabled, IPC is disabled 00:06:17.120 EAL: Heap on socket 0 was shrunk by 34MB 00:06:17.120 EAL: Trying to obtain current memory policy. 00:06:17.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.121 EAL: Restoring previous memory policy: 4 00:06:17.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.121 EAL: request: mp_malloc_sync 00:06:17.121 EAL: No shared files mode enabled, IPC is disabled 00:06:17.121 EAL: Heap on socket 0 was expanded by 66MB 00:06:17.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.121 EAL: request: mp_malloc_sync 00:06:17.121 EAL: No shared files mode enabled, IPC is disabled 00:06:17.121 EAL: Heap on socket 0 was shrunk by 66MB 00:06:17.121 EAL: Trying to obtain current memory policy. 00:06:17.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.121 EAL: Restoring previous memory policy: 4 00:06:17.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.121 EAL: request: mp_malloc_sync 00:06:17.121 EAL: No shared files mode enabled, IPC is disabled 00:06:17.121 EAL: Heap on socket 0 was expanded by 130MB 00:06:17.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.379 EAL: request: mp_malloc_sync 00:06:17.379 EAL: No shared files mode enabled, IPC is disabled 00:06:17.379 EAL: Heap on socket 0 was shrunk by 130MB 00:06:17.379 EAL: Trying to obtain current memory policy. 00:06:17.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.379 EAL: Restoring previous memory policy: 4 00:06:17.379 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.379 EAL: request: mp_malloc_sync 00:06:17.379 EAL: No shared files mode enabled, IPC is disabled 00:06:17.379 EAL: Heap on socket 0 was expanded by 258MB 00:06:17.379 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.379 EAL: request: mp_malloc_sync 00:06:17.379 EAL: No shared files mode enabled, IPC is disabled 00:06:17.379 EAL: Heap on socket 0 was shrunk by 258MB 00:06:17.379 EAL: Trying to obtain current memory policy. 
00:06:17.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.379 EAL: Restoring previous memory policy: 4 00:06:17.379 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.379 EAL: request: mp_malloc_sync 00:06:17.379 EAL: No shared files mode enabled, IPC is disabled 00:06:17.379 EAL: Heap on socket 0 was expanded by 514MB 00:06:17.637 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.637 EAL: request: mp_malloc_sync 00:06:17.637 EAL: No shared files mode enabled, IPC is disabled 00:06:17.637 EAL: Heap on socket 0 was shrunk by 514MB 00:06:17.637 EAL: Trying to obtain current memory policy. 00:06:17.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:17.895 EAL: Restoring previous memory policy: 4 00:06:17.895 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.895 EAL: request: mp_malloc_sync 00:06:17.895 EAL: No shared files mode enabled, IPC is disabled 00:06:17.895 EAL: Heap on socket 0 was expanded by 1026MB 00:06:17.895 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.155 EAL: request: mp_malloc_sync 00:06:18.155 EAL: No shared files mode enabled, IPC is disabled 00:06:18.155 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:18.155 passed 00:06:18.155 00:06:18.155 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.155 suites 1 1 n/a 0 0 00:06:18.155 tests 2 2 2 0 0 00:06:18.155 asserts 497 497 497 0 n/a 00:06:18.155 00:06:18.155 Elapsed time = 0.966 seconds 00:06:18.155 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.155 EAL: request: mp_malloc_sync 00:06:18.155 EAL: No shared files mode enabled, IPC is disabled 00:06:18.155 EAL: Heap on socket 0 was shrunk by 2MB 00:06:18.155 EAL: No shared files mode enabled, IPC is disabled 00:06:18.155 EAL: No shared files mode enabled, IPC is disabled 00:06:18.155 EAL: No shared files mode enabled, IPC is disabled 00:06:18.155 00:06:18.155 real 0m1.107s 00:06:18.155 user 0m0.639s 00:06:18.155 sys 0m0.438s 00:06:18.155 20:27:06 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.155 20:27:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:18.155 ************************************ 00:06:18.155 END TEST env_vtophys 00:06:18.155 ************************************ 00:06:18.155 20:27:06 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:18.155 20:27:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.155 20:27:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.155 20:27:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.155 ************************************ 00:06:18.155 START TEST env_pci 00:06:18.155 ************************************ 00:06:18.155 20:27:06 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:18.155 00:06:18.155 00:06:18.155 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.155 http://cunit.sourceforge.net/ 00:06:18.155 00:06:18.155 00:06:18.155 Suite: pci 00:06:18.155 Test: pci_hook ...[2024-07-26 20:27:06.676337] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 915927 has claimed it 00:06:18.414 EAL: Cannot find device (10000:00:01.0) 00:06:18.415 EAL: Failed to attach device on primary process 00:06:18.415 passed 00:06:18.415 00:06:18.415 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.415 suites 1 1 
n/a 0 0 00:06:18.415 tests 1 1 1 0 0 00:06:18.415 asserts 25 25 25 0 n/a 00:06:18.415 00:06:18.415 Elapsed time = 0.041 seconds 00:06:18.415 00:06:18.415 real 0m0.063s 00:06:18.415 user 0m0.014s 00:06:18.415 sys 0m0.049s 00:06:18.415 20:27:06 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.415 20:27:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:18.415 ************************************ 00:06:18.415 END TEST env_pci 00:06:18.415 ************************************ 00:06:18.415 20:27:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:18.415 20:27:06 env -- env/env.sh@15 -- # uname 00:06:18.415 20:27:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:18.415 20:27:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:18.415 20:27:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:18.415 20:27:06 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:18.415 20:27:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.415 20:27:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.415 ************************************ 00:06:18.415 START TEST env_dpdk_post_init 00:06:18.415 ************************************ 00:06:18.415 20:27:06 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:18.415 EAL: Detected CPU lcores: 112 00:06:18.415 EAL: Detected NUMA nodes: 2 00:06:18.415 EAL: Detected shared linkage of DPDK 00:06:18.415 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:18.415 EAL: Selected IOVA mode 'VA' 00:06:18.415 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.415 EAL: VFIO support initialized 00:06:18.415 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:18.415 EAL: Using IOMMU type 1 (Type 1) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:80:04.2 (socket 1) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:06:18.674 EAL: Ignore mapping IO port bar(1) 00:06:18.674 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:06:19.609 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:06:23.792 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:06:23.792 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:06:23.792 Starting DPDK initialization... 00:06:23.792 Starting SPDK post initialization... 00:06:23.792 SPDK NVMe probe 00:06:23.792 Attaching to 0000:d8:00.0 00:06:23.792 Attached to 0000:d8:00.0 00:06:23.792 Cleaning up... 00:06:23.792 00:06:23.792 real 0m5.364s 00:06:23.792 user 0m3.955s 00:06:23.792 sys 0m0.468s 00:06:23.792 20:27:12 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.792 20:27:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:23.792 ************************************ 00:06:23.792 END TEST env_dpdk_post_init 00:06:23.792 ************************************ 00:06:23.792 20:27:12 env -- env/env.sh@26 -- # uname 00:06:23.792 20:27:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:23.792 20:27:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:23.792 20:27:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.792 20:27:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.792 20:27:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:23.792 ************************************ 00:06:23.792 START TEST env_mem_callbacks 00:06:23.792 ************************************ 00:06:23.792 20:27:12 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:23.792 EAL: Detected CPU lcores: 112 00:06:23.792 EAL: Detected NUMA nodes: 2 00:06:23.792 EAL: Detected shared linkage of DPDK 00:06:23.793 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:23.793 EAL: Selected IOVA mode 'VA' 00:06:23.793 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.793 EAL: VFIO support initialized 00:06:23.793 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:23.793 00:06:23.793 00:06:23.793 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.793 http://cunit.sourceforge.net/ 00:06:23.793 00:06:23.793 00:06:23.793 Suite: memory 00:06:23.793 Test: test ... 
00:06:23.793 register 0x200000200000 2097152 00:06:23.793 malloc 3145728 00:06:23.793 register 0x200000400000 4194304 00:06:23.793 buf 0x200000500000 len 3145728 PASSED 00:06:23.793 malloc 64 00:06:23.793 buf 0x2000004fff40 len 64 PASSED 00:06:23.793 malloc 4194304 00:06:23.793 register 0x200000800000 6291456 00:06:23.793 buf 0x200000a00000 len 4194304 PASSED 00:06:23.793 free 0x200000500000 3145728 00:06:23.793 free 0x2000004fff40 64 00:06:23.793 unregister 0x200000400000 4194304 PASSED 00:06:23.793 free 0x200000a00000 4194304 00:06:23.793 unregister 0x200000800000 6291456 PASSED 00:06:23.793 malloc 8388608 00:06:23.793 register 0x200000400000 10485760 00:06:23.793 buf 0x200000600000 len 8388608 PASSED 00:06:23.793 free 0x200000600000 8388608 00:06:23.793 unregister 0x200000400000 10485760 PASSED 00:06:23.793 passed 00:06:23.793 00:06:23.793 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.793 suites 1 1 n/a 0 0 00:06:23.793 tests 1 1 1 0 0 00:06:23.793 asserts 15 15 15 0 n/a 00:06:23.793 00:06:23.793 Elapsed time = 0.005 seconds 00:06:23.793 00:06:23.793 real 0m0.070s 00:06:23.793 user 0m0.016s 00:06:23.793 sys 0m0.053s 00:06:23.793 20:27:12 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.793 20:27:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:23.793 ************************************ 00:06:23.793 END TEST env_mem_callbacks 00:06:23.793 ************************************ 00:06:24.050 00:06:24.050 real 0m7.253s 00:06:24.050 user 0m4.921s 00:06:24.050 sys 0m1.393s 00:06:24.050 20:27:12 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.050 20:27:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:24.050 ************************************ 00:06:24.050 END TEST env 00:06:24.050 ************************************ 00:06:24.050 20:27:12 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:24.050 20:27:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.050 20:27:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.050 20:27:12 -- common/autotest_common.sh@10 -- # set +x 00:06:24.050 ************************************ 00:06:24.050 START TEST rpc 00:06:24.050 ************************************ 00:06:24.050 20:27:12 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:24.050 * Looking for test storage... 00:06:24.050 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:24.050 20:27:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=917114 00:06:24.050 20:27:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.050 20:27:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:24.050 20:27:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 917114 00:06:24.050 20:27:12 rpc -- common/autotest_common.sh@831 -- # '[' -z 917114 ']' 00:06:24.050 20:27:12 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.050 20:27:12 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.051 20:27:12 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
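
Each rpc_cmd below is scripts/rpc.py posting a JSON-RPC 2.0 request over the /var/tmp/spdk.sock Unix socket that waitforlisten is polling for here. On the server side, method names such as bdev_malloc_create and bdev_get_bdevs resolve to handlers registered inside spdk_tgt. A minimal sketch of such a registration, assuming the public spdk/rpc.h API ("hello_sketch" is a made-up method, not one this test calls):

    #include "spdk/rpc.h"
    #include "spdk/jsonrpc.h"

    static void
    rpc_hello_sketch(struct spdk_jsonrpc_request *request,
                     const struct spdk_json_val *params)
    {
        struct spdk_json_write_ctx *w;

        if (params != NULL) {
            /* this sketch takes no parameters; reject anything passed in */
            spdk_jsonrpc_send_error_response(request,
                SPDK_JSONRPC_ERROR_INVALID_PARAMS, "no parameters expected");
            return;
        }
        w = spdk_jsonrpc_begin_result(request);
        spdk_json_write_string(w, "hello from spdk_tgt");
        spdk_jsonrpc_end_result(request, w);
    }
    /* Callable once the socket is up, e.g.: scripts/rpc.py hello_sketch */
    SPDK_RPC_REGISTER("hello_sketch", rpc_hello_sketch, SPDK_RPC_RUNTIME)
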
00:06:24.051 20:27:12 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.051 20:27:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.309 [2024-07-26 20:27:12.619115] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:06:24.309 [2024-07-26 20:27:12.619180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid917114 ] 00:06:24.309 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.309 [2024-07-26 20:27:12.702859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.309 [2024-07-26 20:27:12.742803] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:24.309 [2024-07-26 20:27:12.742842] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 917114' to capture a snapshot of events at runtime. 00:06:24.309 [2024-07-26 20:27:12.742852] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.309 [2024-07-26 20:27:12.742861] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.309 [2024-07-26 20:27:12.742869] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid917114 for offline analysis/debug. 00:06:24.309 [2024-07-26 20:27:12.742895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.875 20:27:13 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.875 20:27:13 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.875 20:27:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:24.875 20:27:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:24.875 20:27:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:24.875 20:27:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:24.875 20:27:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.875 20:27:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.875 20:27:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.133 ************************************ 00:06:25.133 START TEST rpc_integrity 00:06:25.133 ************************************ 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 
0 == 0 ']' 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:25.133 { 00:06:25.133 "name": "Malloc0", 00:06:25.133 "aliases": [ 00:06:25.133 "39bc87b4-c93f-44a7-98b4-9b38b4764903" 00:06:25.133 ], 00:06:25.133 "product_name": "Malloc disk", 00:06:25.133 "block_size": 512, 00:06:25.133 "num_blocks": 16384, 00:06:25.133 "uuid": "39bc87b4-c93f-44a7-98b4-9b38b4764903", 00:06:25.133 "assigned_rate_limits": { 00:06:25.133 "rw_ios_per_sec": 0, 00:06:25.133 "rw_mbytes_per_sec": 0, 00:06:25.133 "r_mbytes_per_sec": 0, 00:06:25.133 "w_mbytes_per_sec": 0 00:06:25.133 }, 00:06:25.133 "claimed": false, 00:06:25.133 "zoned": false, 00:06:25.133 "supported_io_types": { 00:06:25.133 "read": true, 00:06:25.133 "write": true, 00:06:25.133 "unmap": true, 00:06:25.133 "flush": true, 00:06:25.133 "reset": true, 00:06:25.133 "nvme_admin": false, 00:06:25.133 "nvme_io": false, 00:06:25.133 "nvme_io_md": false, 00:06:25.133 "write_zeroes": true, 00:06:25.133 "zcopy": true, 00:06:25.133 "get_zone_info": false, 00:06:25.133 "zone_management": false, 00:06:25.133 "zone_append": false, 00:06:25.133 "compare": false, 00:06:25.133 "compare_and_write": false, 00:06:25.133 "abort": true, 00:06:25.133 "seek_hole": false, 00:06:25.133 "seek_data": false, 00:06:25.133 "copy": true, 00:06:25.133 "nvme_iov_md": false 00:06:25.133 }, 00:06:25.133 "memory_domains": [ 00:06:25.133 { 00:06:25.133 "dma_device_id": "system", 00:06:25.133 "dma_device_type": 1 00:06:25.133 }, 00:06:25.133 { 00:06:25.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.133 "dma_device_type": 2 00:06:25.133 } 00:06:25.133 ], 00:06:25.133 "driver_specific": {} 00:06:25.133 } 00:06:25.133 ]' 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.133 [2024-07-26 20:27:13.584727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:25.133 [2024-07-26 20:27:13.584758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:25.133 [2024-07-26 20:27:13.584771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2324120 00:06:25.133 [2024-07-26 20:27:13.584780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:25.133 [2024-07-26 20:27:13.585905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:25.133 [2024-07-26 20:27:13.585929] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:25.133 Passthru0 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:25.133 { 00:06:25.133 "name": "Malloc0", 00:06:25.133 "aliases": [ 00:06:25.133 "39bc87b4-c93f-44a7-98b4-9b38b4764903" 00:06:25.133 ], 00:06:25.133 "product_name": "Malloc disk", 00:06:25.133 "block_size": 512, 00:06:25.133 "num_blocks": 16384, 00:06:25.133 "uuid": "39bc87b4-c93f-44a7-98b4-9b38b4764903", 00:06:25.133 "assigned_rate_limits": { 00:06:25.133 "rw_ios_per_sec": 0, 00:06:25.133 "rw_mbytes_per_sec": 0, 00:06:25.133 "r_mbytes_per_sec": 0, 00:06:25.133 "w_mbytes_per_sec": 0 00:06:25.133 }, 00:06:25.133 "claimed": true, 00:06:25.133 "claim_type": "exclusive_write", 00:06:25.133 "zoned": false, 00:06:25.133 "supported_io_types": { 00:06:25.133 "read": true, 00:06:25.133 "write": true, 00:06:25.133 "unmap": true, 00:06:25.133 "flush": true, 00:06:25.133 "reset": true, 00:06:25.133 "nvme_admin": false, 00:06:25.133 "nvme_io": false, 00:06:25.133 "nvme_io_md": false, 00:06:25.133 "write_zeroes": true, 00:06:25.133 "zcopy": true, 00:06:25.133 "get_zone_info": false, 00:06:25.133 "zone_management": false, 00:06:25.133 "zone_append": false, 00:06:25.133 "compare": false, 00:06:25.133 "compare_and_write": false, 00:06:25.133 "abort": true, 00:06:25.133 "seek_hole": false, 00:06:25.133 "seek_data": false, 00:06:25.133 "copy": true, 00:06:25.133 "nvme_iov_md": false 00:06:25.133 }, 00:06:25.133 "memory_domains": [ 00:06:25.133 { 00:06:25.133 "dma_device_id": "system", 00:06:25.133 "dma_device_type": 1 00:06:25.133 }, 00:06:25.133 { 00:06:25.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.133 "dma_device_type": 2 00:06:25.133 } 00:06:25.133 ], 00:06:25.133 "driver_specific": {} 00:06:25.133 }, 00:06:25.133 { 00:06:25.133 "name": "Passthru0", 00:06:25.133 "aliases": [ 00:06:25.133 "f2f40492-207a-5b7a-8681-b66dbda4cc22" 00:06:25.133 ], 00:06:25.133 "product_name": "passthru", 00:06:25.133 "block_size": 512, 00:06:25.133 "num_blocks": 16384, 00:06:25.133 "uuid": "f2f40492-207a-5b7a-8681-b66dbda4cc22", 00:06:25.133 "assigned_rate_limits": { 00:06:25.133 "rw_ios_per_sec": 0, 00:06:25.133 "rw_mbytes_per_sec": 0, 00:06:25.133 "r_mbytes_per_sec": 0, 00:06:25.133 "w_mbytes_per_sec": 0 00:06:25.133 }, 00:06:25.133 "claimed": false, 00:06:25.133 "zoned": false, 00:06:25.133 "supported_io_types": { 00:06:25.133 "read": true, 00:06:25.133 "write": true, 00:06:25.133 "unmap": true, 00:06:25.133 "flush": true, 00:06:25.133 "reset": true, 00:06:25.133 "nvme_admin": false, 00:06:25.133 "nvme_io": false, 00:06:25.133 "nvme_io_md": false, 00:06:25.133 "write_zeroes": true, 00:06:25.133 "zcopy": true, 00:06:25.133 "get_zone_info": false, 00:06:25.133 "zone_management": false, 00:06:25.133 "zone_append": false, 00:06:25.133 "compare": false, 00:06:25.133 "compare_and_write": false, 00:06:25.133 "abort": true, 00:06:25.133 "seek_hole": false, 00:06:25.133 "seek_data": false, 00:06:25.133 "copy": true, 00:06:25.133 "nvme_iov_md": false 00:06:25.133 }, 00:06:25.133 "memory_domains": [ 
00:06:25.133 { 00:06:25.133 "dma_device_id": "system", 00:06:25.133 "dma_device_type": 1 00:06:25.133 }, 00:06:25.133 { 00:06:25.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.133 "dma_device_type": 2 00:06:25.133 } 00:06:25.133 ], 00:06:25.133 "driver_specific": { 00:06:25.133 "passthru": { 00:06:25.133 "name": "Passthru0", 00:06:25.133 "base_bdev_name": "Malloc0" 00:06:25.133 } 00:06:25.133 } 00:06:25.133 } 00:06:25.133 ]' 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.133 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.133 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.391 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.391 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:25.391 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.391 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.391 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.391 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:25.391 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:25.391 20:27:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:25.391 00:06:25.391 real 0m0.298s 00:06:25.391 user 0m0.188s 00:06:25.391 sys 0m0.046s 00:06:25.391 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.391 20:27:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.391 ************************************ 00:06:25.391 END TEST rpc_integrity 00:06:25.391 ************************************ 00:06:25.391 20:27:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:25.391 20:27:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.391 20:27:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.391 20:27:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.391 ************************************ 00:06:25.391 START TEST rpc_plugins 00:06:25.391 ************************************ 00:06:25.391 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:25.391 20:27:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:25.391 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.391 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.391 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.391 20:27:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:25.391 20:27:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:25.391 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.391 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.391 
20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.391 20:27:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:25.391 { 00:06:25.391 "name": "Malloc1", 00:06:25.391 "aliases": [ 00:06:25.391 "085d676f-32a6-4614-98b3-d5b51f7f0c03" 00:06:25.391 ], 00:06:25.391 "product_name": "Malloc disk", 00:06:25.391 "block_size": 4096, 00:06:25.392 "num_blocks": 256, 00:06:25.392 "uuid": "085d676f-32a6-4614-98b3-d5b51f7f0c03", 00:06:25.392 "assigned_rate_limits": { 00:06:25.392 "rw_ios_per_sec": 0, 00:06:25.392 "rw_mbytes_per_sec": 0, 00:06:25.392 "r_mbytes_per_sec": 0, 00:06:25.392 "w_mbytes_per_sec": 0 00:06:25.392 }, 00:06:25.392 "claimed": false, 00:06:25.392 "zoned": false, 00:06:25.392 "supported_io_types": { 00:06:25.392 "read": true, 00:06:25.392 "write": true, 00:06:25.392 "unmap": true, 00:06:25.392 "flush": true, 00:06:25.392 "reset": true, 00:06:25.392 "nvme_admin": false, 00:06:25.392 "nvme_io": false, 00:06:25.392 "nvme_io_md": false, 00:06:25.392 "write_zeroes": true, 00:06:25.392 "zcopy": true, 00:06:25.392 "get_zone_info": false, 00:06:25.392 "zone_management": false, 00:06:25.392 "zone_append": false, 00:06:25.392 "compare": false, 00:06:25.392 "compare_and_write": false, 00:06:25.392 "abort": true, 00:06:25.392 "seek_hole": false, 00:06:25.392 "seek_data": false, 00:06:25.392 "copy": true, 00:06:25.392 "nvme_iov_md": false 00:06:25.392 }, 00:06:25.392 "memory_domains": [ 00:06:25.392 { 00:06:25.392 "dma_device_id": "system", 00:06:25.392 "dma_device_type": 1 00:06:25.392 }, 00:06:25.392 { 00:06:25.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.392 "dma_device_type": 2 00:06:25.392 } 00:06:25.392 ], 00:06:25.392 "driver_specific": {} 00:06:25.392 } 00:06:25.392 ]' 00:06:25.392 20:27:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:25.392 20:27:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:25.392 20:27:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:25.392 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.392 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.392 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.392 20:27:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:25.392 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.392 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.392 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.392 20:27:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:25.392 20:27:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:25.649 20:27:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:25.649 00:06:25.649 real 0m0.139s 00:06:25.649 user 0m0.079s 00:06:25.649 sys 0m0.026s 00:06:25.649 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.649 20:27:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:25.649 ************************************ 00:06:25.649 END TEST rpc_plugins 00:06:25.649 ************************************ 00:06:25.649 20:27:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:25.649 20:27:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.649 20:27:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.649 20:27:14 rpc -- common/autotest_common.sh@10 -- # set +x 
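
The trace_get_info dump in the next test ties back to the -e bdev flag spdk_tgt was started with earlier: each tracepoint group owns one bit of the 64-bit tpoint_group_mask, and the per-group mask (bdev reports 0xffffffffffffffff) enables individual tracepoints inside a group. That bdev sits on bit 3, giving the 0x8 the jq checks look for, is read off this run's output rather than quoted from SPDK's headers; a worked check of the arithmetic:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        const unsigned bdev_group = 3;  /* inferred from this run: 0x8 == 1 << 3 */
        uint64_t tpoint_group_mask = UINT64_C(1) << bdev_group;
        uint64_t bdev_tpoint_mask = UINT64_MAX;  /* every tracepoint in the group */

        printf("tpoint_group_mask 0x%" PRIx64 "\n", tpoint_group_mask); /* 0x8 */
        printf("bdev tpoint_mask 0x%" PRIx64 "\n", bdev_tpoint_mask);   /* 0xffffffffffffffff */
        return 0;
    }
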
00:06:25.649 ************************************ 00:06:25.649 START TEST rpc_trace_cmd_test 00:06:25.649 ************************************ 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:25.649 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid917114", 00:06:25.649 "tpoint_group_mask": "0x8", 00:06:25.649 "iscsi_conn": { 00:06:25.649 "mask": "0x2", 00:06:25.649 "tpoint_mask": "0x0" 00:06:25.649 }, 00:06:25.649 "scsi": { 00:06:25.649 "mask": "0x4", 00:06:25.649 "tpoint_mask": "0x0" 00:06:25.649 }, 00:06:25.649 "bdev": { 00:06:25.649 "mask": "0x8", 00:06:25.649 "tpoint_mask": "0xffffffffffffffff" 00:06:25.649 }, 00:06:25.649 "nvmf_rdma": { 00:06:25.649 "mask": "0x10", 00:06:25.649 "tpoint_mask": "0x0" 00:06:25.649 }, 00:06:25.649 "nvmf_tcp": { 00:06:25.649 "mask": "0x20", 00:06:25.649 "tpoint_mask": "0x0" 00:06:25.649 }, 00:06:25.649 "ftl": { 00:06:25.649 "mask": "0x40", 00:06:25.649 "tpoint_mask": "0x0" 00:06:25.649 }, 00:06:25.649 "blobfs": { 00:06:25.649 "mask": "0x80", 00:06:25.649 "tpoint_mask": "0x0" 00:06:25.649 }, 00:06:25.649 "dsa": { 00:06:25.649 "mask": "0x200", 00:06:25.649 "tpoint_mask": "0x0" 00:06:25.649 }, 00:06:25.649 "thread": { 00:06:25.649 "mask": "0x400", 00:06:25.649 "tpoint_mask": "0x0" 00:06:25.649 }, 00:06:25.649 "nvme_pcie": { 00:06:25.649 "mask": "0x800", 00:06:25.649 "tpoint_mask": "0x0" 00:06:25.649 }, 00:06:25.649 "iaa": { 00:06:25.649 "mask": "0x1000", 00:06:25.649 "tpoint_mask": "0x0" 00:06:25.649 }, 00:06:25.649 "nvme_tcp": { 00:06:25.649 "mask": "0x2000", 00:06:25.649 "tpoint_mask": "0x0" 00:06:25.649 }, 00:06:25.649 "bdev_nvme": { 00:06:25.649 "mask": "0x4000", 00:06:25.649 "tpoint_mask": "0x0" 00:06:25.649 }, 00:06:25.649 "sock": { 00:06:25.649 "mask": "0x8000", 00:06:25.649 "tpoint_mask": "0x0" 00:06:25.649 } 00:06:25.649 }' 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:25.649 20:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:25.906 20:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:25.906 20:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:25.906 20:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:25.906 00:06:25.906 real 0m0.224s 00:06:25.906 user 0m0.184s 00:06:25.906 sys 0m0.033s 00:06:25.906 20:27:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.906 20:27:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 
00:06:25.906 ************************************ 00:06:25.906 END TEST rpc_trace_cmd_test 00:06:25.906 ************************************ 00:06:25.906 20:27:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:25.906 20:27:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:25.906 20:27:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:25.906 20:27:14 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.906 20:27:14 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.906 20:27:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.906 ************************************ 00:06:25.906 START TEST rpc_daemon_integrity 00:06:25.906 ************************************ 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:25.906 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:25.907 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.907 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.907 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.907 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:25.907 { 00:06:25.907 "name": "Malloc2", 00:06:25.907 "aliases": [ 00:06:25.907 "5a15dc78-d698-4e85-83a8-7fd4869b01eb" 00:06:25.907 ], 00:06:25.907 "product_name": "Malloc disk", 00:06:25.907 "block_size": 512, 00:06:25.907 "num_blocks": 16384, 00:06:25.907 "uuid": "5a15dc78-d698-4e85-83a8-7fd4869b01eb", 00:06:25.907 "assigned_rate_limits": { 00:06:25.907 "rw_ios_per_sec": 0, 00:06:25.907 "rw_mbytes_per_sec": 0, 00:06:25.907 "r_mbytes_per_sec": 0, 00:06:25.907 "w_mbytes_per_sec": 0 00:06:25.907 }, 00:06:25.907 "claimed": false, 00:06:25.907 "zoned": false, 00:06:25.907 "supported_io_types": { 00:06:25.907 "read": true, 00:06:25.907 "write": true, 00:06:25.907 "unmap": true, 00:06:25.907 "flush": true, 00:06:25.907 "reset": true, 00:06:25.907 "nvme_admin": false, 00:06:25.907 "nvme_io": false, 00:06:25.907 "nvme_io_md": false, 00:06:25.907 "write_zeroes": true, 00:06:25.907 "zcopy": true, 00:06:25.907 "get_zone_info": false, 00:06:25.907 "zone_management": false, 00:06:25.907 "zone_append": false, 00:06:25.907 "compare": false, 00:06:25.907 "compare_and_write": false, 00:06:25.907 "abort": true, 00:06:25.907 "seek_hole": false, 00:06:25.907 
"seek_data": false, 00:06:25.907 "copy": true, 00:06:25.907 "nvme_iov_md": false 00:06:25.907 }, 00:06:25.907 "memory_domains": [ 00:06:25.907 { 00:06:25.907 "dma_device_id": "system", 00:06:25.907 "dma_device_type": 1 00:06:25.907 }, 00:06:25.907 { 00:06:25.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.907 "dma_device_type": 2 00:06:25.907 } 00:06:25.907 ], 00:06:25.907 "driver_specific": {} 00:06:25.907 } 00:06:25.907 ]' 00:06:25.907 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.165 [2024-07-26 20:27:14.487172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:26.165 [2024-07-26 20:27:14.487200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:26.165 [2024-07-26 20:27:14.487213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2317440 00:06:26.165 [2024-07-26 20:27:14.487222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:26.165 [2024-07-26 20:27:14.488116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:26.165 [2024-07-26 20:27:14.488139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:26.165 Passthru0 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:26.165 { 00:06:26.165 "name": "Malloc2", 00:06:26.165 "aliases": [ 00:06:26.165 "5a15dc78-d698-4e85-83a8-7fd4869b01eb" 00:06:26.165 ], 00:06:26.165 "product_name": "Malloc disk", 00:06:26.165 "block_size": 512, 00:06:26.165 "num_blocks": 16384, 00:06:26.165 "uuid": "5a15dc78-d698-4e85-83a8-7fd4869b01eb", 00:06:26.165 "assigned_rate_limits": { 00:06:26.165 "rw_ios_per_sec": 0, 00:06:26.165 "rw_mbytes_per_sec": 0, 00:06:26.165 "r_mbytes_per_sec": 0, 00:06:26.165 "w_mbytes_per_sec": 0 00:06:26.165 }, 00:06:26.165 "claimed": true, 00:06:26.165 "claim_type": "exclusive_write", 00:06:26.165 "zoned": false, 00:06:26.165 "supported_io_types": { 00:06:26.165 "read": true, 00:06:26.165 "write": true, 00:06:26.165 "unmap": true, 00:06:26.165 "flush": true, 00:06:26.165 "reset": true, 00:06:26.165 "nvme_admin": false, 00:06:26.165 "nvme_io": false, 00:06:26.165 "nvme_io_md": false, 00:06:26.165 "write_zeroes": true, 00:06:26.165 "zcopy": true, 00:06:26.165 "get_zone_info": false, 00:06:26.165 "zone_management": false, 00:06:26.165 "zone_append": false, 00:06:26.165 "compare": false, 00:06:26.165 "compare_and_write": false, 00:06:26.165 "abort": true, 00:06:26.165 "seek_hole": false, 00:06:26.165 "seek_data": false, 00:06:26.165 "copy": true, 00:06:26.165 "nvme_iov_md": false 00:06:26.165 }, 00:06:26.165 "memory_domains": 
[ 00:06:26.165 { 00:06:26.165 "dma_device_id": "system", 00:06:26.165 "dma_device_type": 1 00:06:26.165 }, 00:06:26.165 { 00:06:26.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.165 "dma_device_type": 2 00:06:26.165 } 00:06:26.165 ], 00:06:26.165 "driver_specific": {} 00:06:26.165 }, 00:06:26.165 { 00:06:26.165 "name": "Passthru0", 00:06:26.165 "aliases": [ 00:06:26.165 "0111a0da-e7d5-56d4-bcb0-65734a5ee21b" 00:06:26.165 ], 00:06:26.165 "product_name": "passthru", 00:06:26.165 "block_size": 512, 00:06:26.165 "num_blocks": 16384, 00:06:26.165 "uuid": "0111a0da-e7d5-56d4-bcb0-65734a5ee21b", 00:06:26.165 "assigned_rate_limits": { 00:06:26.165 "rw_ios_per_sec": 0, 00:06:26.165 "rw_mbytes_per_sec": 0, 00:06:26.165 "r_mbytes_per_sec": 0, 00:06:26.165 "w_mbytes_per_sec": 0 00:06:26.165 }, 00:06:26.165 "claimed": false, 00:06:26.165 "zoned": false, 00:06:26.165 "supported_io_types": { 00:06:26.165 "read": true, 00:06:26.165 "write": true, 00:06:26.165 "unmap": true, 00:06:26.165 "flush": true, 00:06:26.165 "reset": true, 00:06:26.165 "nvme_admin": false, 00:06:26.165 "nvme_io": false, 00:06:26.165 "nvme_io_md": false, 00:06:26.165 "write_zeroes": true, 00:06:26.165 "zcopy": true, 00:06:26.165 "get_zone_info": false, 00:06:26.165 "zone_management": false, 00:06:26.165 "zone_append": false, 00:06:26.165 "compare": false, 00:06:26.165 "compare_and_write": false, 00:06:26.165 "abort": true, 00:06:26.165 "seek_hole": false, 00:06:26.165 "seek_data": false, 00:06:26.165 "copy": true, 00:06:26.165 "nvme_iov_md": false 00:06:26.165 }, 00:06:26.165 "memory_domains": [ 00:06:26.165 { 00:06:26.165 "dma_device_id": "system", 00:06:26.165 "dma_device_type": 1 00:06:26.165 }, 00:06:26.165 { 00:06:26.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.165 "dma_device_type": 2 00:06:26.165 } 00:06:26.165 ], 00:06:26.165 "driver_specific": { 00:06:26.165 "passthru": { 00:06:26.165 "name": "Passthru0", 00:06:26.165 "base_bdev_name": "Malloc2" 00:06:26.165 } 00:06:26.165 } 00:06:26.165 } 00:06:26.165 ]' 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:26.165 20:27:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:26.165 20:27:14 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:26.165 00:06:26.165 real 0m0.280s 00:06:26.165 user 0m0.176s 00:06:26.165 sys 0m0.046s 00:06:26.166 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.166 20:27:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.166 ************************************ 00:06:26.166 END TEST rpc_daemon_integrity 00:06:26.166 ************************************ 00:06:26.166 20:27:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:26.166 20:27:14 rpc -- rpc/rpc.sh@84 -- # killprocess 917114 00:06:26.166 20:27:14 rpc -- common/autotest_common.sh@950 -- # '[' -z 917114 ']' 00:06:26.166 20:27:14 rpc -- common/autotest_common.sh@954 -- # kill -0 917114 00:06:26.166 20:27:14 rpc -- common/autotest_common.sh@955 -- # uname 00:06:26.166 20:27:14 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.166 20:27:14 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 917114 00:06:26.423 20:27:14 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.423 20:27:14 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.423 20:27:14 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 917114' 00:06:26.423 killing process with pid 917114 00:06:26.423 20:27:14 rpc -- common/autotest_common.sh@969 -- # kill 917114 00:06:26.423 20:27:14 rpc -- common/autotest_common.sh@974 -- # wait 917114 00:06:26.680 00:06:26.680 real 0m2.558s 00:06:26.680 user 0m3.241s 00:06:26.680 sys 0m0.814s 00:06:26.680 20:27:15 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.680 20:27:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.680 ************************************ 00:06:26.680 END TEST rpc 00:06:26.680 ************************************ 00:06:26.680 20:27:15 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:26.680 20:27:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.680 20:27:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.680 20:27:15 -- common/autotest_common.sh@10 -- # set +x 00:06:26.681 ************************************ 00:06:26.681 START TEST skip_rpc 00:06:26.681 ************************************ 00:06:26.681 20:27:15 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:26.681 * Looking for test storage... 
00:06:26.681 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:26.681 20:27:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:26.681 20:27:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:26.681 20:27:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:26.681 20:27:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.681 20:27:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.681 20:27:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.938 ************************************ 00:06:26.938 START TEST skip_rpc 00:06:26.938 ************************************ 00:06:26.938 20:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:26.938 20:27:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=917684 00:06:26.938 20:27:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.938 20:27:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:26.938 20:27:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:26.938 [2024-07-26 20:27:15.295767] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:06:26.938 [2024-07-26 20:27:15.295814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid917684 ] 00:06:26.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.938 [2024-07-26 20:27:15.380327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.938 [2024-07-26 20:27:15.418968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.273 20:27:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:32.273 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:32.273 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:32.273 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:32.273 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.273 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:32.273 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.273 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:32.273 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.273 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.273 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:32.273 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:32.273 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- 
# trap - SIGINT SIGTERM EXIT 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 917684 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 917684 ']' 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 917684 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 917684 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 917684' 00:06:32.274 killing process with pid 917684 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 917684 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 917684 00:06:32.274 00:06:32.274 real 0m5.364s 00:06:32.274 user 0m5.106s 00:06:32.274 sys 0m0.299s 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.274 20:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.274 ************************************ 00:06:32.274 END TEST skip_rpc 00:06:32.274 ************************************ 00:06:32.274 20:27:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:32.274 20:27:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.274 20:27:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.274 20:27:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.274 ************************************ 00:06:32.274 START TEST skip_rpc_with_json 00:06:32.274 ************************************ 00:06:32.274 20:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:32.274 20:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:32.274 20:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=918653 00:06:32.274 20:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.274 20:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.274 20:27:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 918653 00:06:32.274 20:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 918653 ']' 00:06:32.274 20:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.274 20:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.274 20:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
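The skip_rpc pass above reduces to one contract: start spdk_tgt with --no-rpc-server so no JSON-RPC socket exists, then assert that any RPC call fails. A minimal bash sketch of that flow (the fixed sleep and failure message are illustrative; the real harness in test/rpc/skip_rpc.sh wraps this in its NOT/killprocess helpers):

    # start the target with the JSON-RPC server disabled
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    # spdk_get_version must fail: nothing listens on /var/tmp/spdk.sock
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded without a server" >&2
        exit 1
    fi
    kill "$spdk_pid"; wait "$spdk_pid" || true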
00:06:32.274 20:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.274 20:27:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:32.274 [2024-07-26 20:27:20.741794] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:06:32.274 [2024-07-26 20:27:20.741841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid918653 ] 00:06:32.274 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.533 [2024-07-26 20:27:20.826908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.533 [2024-07-26 20:27:20.865154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.101 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.101 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:33.101 20:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:33.101 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.101 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.101 [2024-07-26 20:27:21.533350] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:33.101 request: 00:06:33.101 { 00:06:33.101 "trtype": "tcp", 00:06:33.101 "method": "nvmf_get_transports", 00:06:33.101 "req_id": 1 00:06:33.101 } 00:06:33.101 Got JSON-RPC error response 00:06:33.101 response: 00:06:33.101 { 00:06:33.101 "code": -19, 00:06:33.101 "message": "No such device" 00:06:33.101 } 00:06:33.101 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:33.101 20:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:33.101 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.101 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.101 [2024-07-26 20:27:21.545452] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.101 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.101 20:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:33.101 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.101 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.361 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.361 20:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:33.361 { 00:06:33.361 "subsystems": [ 00:06:33.361 { 00:06:33.361 "subsystem": "keyring", 00:06:33.361 "config": [] 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "subsystem": "iobuf", 00:06:33.361 "config": [ 00:06:33.361 { 00:06:33.361 "method": "iobuf_set_options", 00:06:33.361 "params": { 00:06:33.361 "small_pool_count": 8192, 00:06:33.361 "large_pool_count": 1024, 00:06:33.361 "small_bufsize": 8192, 00:06:33.361 "large_bufsize": 135168 00:06:33.361 } 00:06:33.361 } 00:06:33.361 ] 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "subsystem": 
"sock", 00:06:33.361 "config": [ 00:06:33.361 { 00:06:33.361 "method": "sock_set_default_impl", 00:06:33.361 "params": { 00:06:33.361 "impl_name": "posix" 00:06:33.361 } 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "method": "sock_impl_set_options", 00:06:33.361 "params": { 00:06:33.361 "impl_name": "ssl", 00:06:33.361 "recv_buf_size": 4096, 00:06:33.361 "send_buf_size": 4096, 00:06:33.361 "enable_recv_pipe": true, 00:06:33.361 "enable_quickack": false, 00:06:33.361 "enable_placement_id": 0, 00:06:33.361 "enable_zerocopy_send_server": true, 00:06:33.361 "enable_zerocopy_send_client": false, 00:06:33.361 "zerocopy_threshold": 0, 00:06:33.361 "tls_version": 0, 00:06:33.361 "enable_ktls": false 00:06:33.361 } 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "method": "sock_impl_set_options", 00:06:33.361 "params": { 00:06:33.361 "impl_name": "posix", 00:06:33.361 "recv_buf_size": 2097152, 00:06:33.361 "send_buf_size": 2097152, 00:06:33.361 "enable_recv_pipe": true, 00:06:33.361 "enable_quickack": false, 00:06:33.361 "enable_placement_id": 0, 00:06:33.361 "enable_zerocopy_send_server": true, 00:06:33.361 "enable_zerocopy_send_client": false, 00:06:33.361 "zerocopy_threshold": 0, 00:06:33.361 "tls_version": 0, 00:06:33.361 "enable_ktls": false 00:06:33.361 } 00:06:33.361 } 00:06:33.361 ] 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "subsystem": "vmd", 00:06:33.361 "config": [] 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "subsystem": "accel", 00:06:33.361 "config": [ 00:06:33.361 { 00:06:33.361 "method": "accel_set_options", 00:06:33.361 "params": { 00:06:33.361 "small_cache_size": 128, 00:06:33.361 "large_cache_size": 16, 00:06:33.361 "task_count": 2048, 00:06:33.361 "sequence_count": 2048, 00:06:33.361 "buf_count": 2048 00:06:33.361 } 00:06:33.361 } 00:06:33.361 ] 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "subsystem": "bdev", 00:06:33.361 "config": [ 00:06:33.361 { 00:06:33.361 "method": "bdev_set_options", 00:06:33.361 "params": { 00:06:33.361 "bdev_io_pool_size": 65535, 00:06:33.361 "bdev_io_cache_size": 256, 00:06:33.361 "bdev_auto_examine": true, 00:06:33.361 "iobuf_small_cache_size": 128, 00:06:33.361 "iobuf_large_cache_size": 16 00:06:33.361 } 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "method": "bdev_raid_set_options", 00:06:33.361 "params": { 00:06:33.361 "process_window_size_kb": 1024, 00:06:33.361 "process_max_bandwidth_mb_sec": 0 00:06:33.361 } 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "method": "bdev_iscsi_set_options", 00:06:33.361 "params": { 00:06:33.361 "timeout_sec": 30 00:06:33.361 } 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "method": "bdev_nvme_set_options", 00:06:33.361 "params": { 00:06:33.361 "action_on_timeout": "none", 00:06:33.361 "timeout_us": 0, 00:06:33.361 "timeout_admin_us": 0, 00:06:33.361 "keep_alive_timeout_ms": 10000, 00:06:33.361 "arbitration_burst": 0, 00:06:33.361 "low_priority_weight": 0, 00:06:33.361 "medium_priority_weight": 0, 00:06:33.361 "high_priority_weight": 0, 00:06:33.361 "nvme_adminq_poll_period_us": 10000, 00:06:33.361 "nvme_ioq_poll_period_us": 0, 00:06:33.361 "io_queue_requests": 0, 00:06:33.361 "delay_cmd_submit": true, 00:06:33.361 "transport_retry_count": 4, 00:06:33.361 "bdev_retry_count": 3, 00:06:33.361 "transport_ack_timeout": 0, 00:06:33.361 "ctrlr_loss_timeout_sec": 0, 00:06:33.361 "reconnect_delay_sec": 0, 00:06:33.361 "fast_io_fail_timeout_sec": 0, 00:06:33.361 "disable_auto_failback": false, 00:06:33.361 "generate_uuids": false, 00:06:33.361 "transport_tos": 0, 00:06:33.361 "nvme_error_stat": false, 00:06:33.361 "rdma_srq_size": 
0, 00:06:33.361 "io_path_stat": false, 00:06:33.361 "allow_accel_sequence": false, 00:06:33.361 "rdma_max_cq_size": 0, 00:06:33.361 "rdma_cm_event_timeout_ms": 0, 00:06:33.361 "dhchap_digests": [ 00:06:33.361 "sha256", 00:06:33.361 "sha384", 00:06:33.361 "sha512" 00:06:33.361 ], 00:06:33.361 "dhchap_dhgroups": [ 00:06:33.361 "null", 00:06:33.361 "ffdhe2048", 00:06:33.361 "ffdhe3072", 00:06:33.361 "ffdhe4096", 00:06:33.361 "ffdhe6144", 00:06:33.361 "ffdhe8192" 00:06:33.361 ] 00:06:33.361 } 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "method": "bdev_nvme_set_hotplug", 00:06:33.361 "params": { 00:06:33.361 "period_us": 100000, 00:06:33.361 "enable": false 00:06:33.361 } 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "method": "bdev_wait_for_examine" 00:06:33.361 } 00:06:33.361 ] 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "subsystem": "scsi", 00:06:33.361 "config": null 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "subsystem": "scheduler", 00:06:33.361 "config": [ 00:06:33.361 { 00:06:33.361 "method": "framework_set_scheduler", 00:06:33.361 "params": { 00:06:33.361 "name": "static" 00:06:33.361 } 00:06:33.361 } 00:06:33.361 ] 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "subsystem": "vhost_scsi", 00:06:33.361 "config": [] 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "subsystem": "vhost_blk", 00:06:33.361 "config": [] 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "subsystem": "ublk", 00:06:33.361 "config": [] 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "subsystem": "nbd", 00:06:33.361 "config": [] 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "subsystem": "nvmf", 00:06:33.361 "config": [ 00:06:33.361 { 00:06:33.361 "method": "nvmf_set_config", 00:06:33.361 "params": { 00:06:33.361 "discovery_filter": "match_any", 00:06:33.361 "admin_cmd_passthru": { 00:06:33.361 "identify_ctrlr": false 00:06:33.361 } 00:06:33.361 } 00:06:33.361 }, 00:06:33.361 { 00:06:33.361 "method": "nvmf_set_max_subsystems", 00:06:33.361 "params": { 00:06:33.362 "max_subsystems": 1024 00:06:33.362 } 00:06:33.362 }, 00:06:33.362 { 00:06:33.362 "method": "nvmf_set_crdt", 00:06:33.362 "params": { 00:06:33.362 "crdt1": 0, 00:06:33.362 "crdt2": 0, 00:06:33.362 "crdt3": 0 00:06:33.362 } 00:06:33.362 }, 00:06:33.362 { 00:06:33.362 "method": "nvmf_create_transport", 00:06:33.362 "params": { 00:06:33.362 "trtype": "TCP", 00:06:33.362 "max_queue_depth": 128, 00:06:33.362 "max_io_qpairs_per_ctrlr": 127, 00:06:33.362 "in_capsule_data_size": 4096, 00:06:33.362 "max_io_size": 131072, 00:06:33.362 "io_unit_size": 131072, 00:06:33.362 "max_aq_depth": 128, 00:06:33.362 "num_shared_buffers": 511, 00:06:33.362 "buf_cache_size": 4294967295, 00:06:33.362 "dif_insert_or_strip": false, 00:06:33.362 "zcopy": false, 00:06:33.362 "c2h_success": true, 00:06:33.362 "sock_priority": 0, 00:06:33.362 "abort_timeout_sec": 1, 00:06:33.362 "ack_timeout": 0, 00:06:33.362 "data_wr_pool_size": 0 00:06:33.362 } 00:06:33.362 } 00:06:33.362 ] 00:06:33.362 }, 00:06:33.362 { 00:06:33.362 "subsystem": "iscsi", 00:06:33.362 "config": [ 00:06:33.362 { 00:06:33.362 "method": "iscsi_set_options", 00:06:33.362 "params": { 00:06:33.362 "node_base": "iqn.2016-06.io.spdk", 00:06:33.362 "max_sessions": 128, 00:06:33.362 "max_connections_per_session": 2, 00:06:33.362 "max_queue_depth": 64, 00:06:33.362 "default_time2wait": 2, 00:06:33.362 "default_time2retain": 20, 00:06:33.362 "first_burst_length": 8192, 00:06:33.362 "immediate_data": true, 00:06:33.362 "allow_duplicated_isid": false, 00:06:33.362 "error_recovery_level": 0, 00:06:33.362 "nop_timeout": 60, 00:06:33.362 
"nop_in_interval": 30, 00:06:33.362 "disable_chap": false, 00:06:33.362 "require_chap": false, 00:06:33.362 "mutual_chap": false, 00:06:33.362 "chap_group": 0, 00:06:33.362 "max_large_datain_per_connection": 64, 00:06:33.362 "max_r2t_per_connection": 4, 00:06:33.362 "pdu_pool_size": 36864, 00:06:33.362 "immediate_data_pool_size": 16384, 00:06:33.362 "data_out_pool_size": 2048 00:06:33.362 } 00:06:33.362 } 00:06:33.362 ] 00:06:33.362 } 00:06:33.362 ] 00:06:33.362 } 00:06:33.362 20:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:33.362 20:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 918653 00:06:33.362 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 918653 ']' 00:06:33.362 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 918653 00:06:33.362 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:33.362 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.362 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 918653 00:06:33.362 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.362 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.362 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 918653' 00:06:33.362 killing process with pid 918653 00:06:33.362 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 918653 00:06:33.362 20:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 918653 00:06:33.621 20:27:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=918929 00:06:33.621 20:27:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:33.621 20:27:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 918929 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 918929 ']' 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 918929 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 918929 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 918929' 00:06:38.899 killing process with pid 918929 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 918929 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 918929 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:38.899 00:06:38.899 real 0m6.744s 00:06:38.899 user 0m6.495s 00:06:38.899 sys 0m0.698s 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.899 20:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:38.899 ************************************ 00:06:38.899 END TEST skip_rpc_with_json 00:06:38.899 ************************************ 00:06:39.158 20:27:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:39.158 20:27:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.158 20:27:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.158 20:27:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.158 ************************************ 00:06:39.158 START TEST skip_rpc_with_delay 00:06:39.158 ************************************ 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:39.158 [2024-07-26 20:27:27.572169] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
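That *ERROR* line is the entire assertion of skip_rpc_with_delay: --wait-for-rpc asks the app to pause initialization until an RPC arrives, which can never happen when --no-rpc-server suppresses the RPC server, so spdk_tgt has to refuse to start. A sketch of the check, with the flag combination taken from the command line above:

    # conflicting flags: init would wait forever for an RPC that can never arrive,
    # so a clean non-zero exit is the expected (passing) outcome
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi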
00:06:39.158 [2024-07-26 20:27:27.572238] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.158 00:06:39.158 real 0m0.071s 00:06:39.158 user 0m0.037s 00:06:39.158 sys 0m0.033s 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.158 20:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:39.158 ************************************ 00:06:39.158 END TEST skip_rpc_with_delay 00:06:39.158 ************************************ 00:06:39.158 20:27:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:39.158 20:27:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:39.158 20:27:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:39.158 20:27:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.158 20:27:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.158 20:27:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.158 ************************************ 00:06:39.158 START TEST exit_on_failed_rpc_init 00:06:39.158 ************************************ 00:06:39.158 20:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:39.158 20:27:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=919935 00:06:39.158 20:27:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 919935 00:06:39.158 20:27:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.158 20:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 919935 ']' 00:06:39.158 20:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.158 20:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.158 20:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.158 20:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.158 20:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:39.417 [2024-07-26 20:27:27.715862] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
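exit_on_failed_rpc_init, starting here, keeps the first target (pid 919935) listening on the default /var/tmp/spdk.sock and then launches a second spdk_tgt on core mask 0x2 against the same socket; the test passes only if the second instance fails RPC init and exits non-zero. Roughly (the sleep stands in for the waitforlisten poll shown above):

    ./build/bin/spdk_tgt -m 0x1 &           # first target owns /var/tmp/spdk.sock
    first_pid=$!
    sleep 5                                 # stand-in for waitforlisten
    if ./build/bin/spdk_tgt -m 0x2; then    # same default socket: rpc init must fail
        echo "unexpected: second target started on a busy socket" >&2
        exit 1
    fi
    kill "$first_pid"; wait "$first_pid" || true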
00:06:39.417 [2024-07-26 20:27:27.715909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid919935 ] 00:06:39.418 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.418 [2024-07-26 20:27:27.798350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.418 [2024-07-26 20:27:27.836978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.985 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.985 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:39.986 20:27:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.986 20:27:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:39.986 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:39.986 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:39.986 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.986 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.986 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.986 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.986 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.986 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.986 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.986 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:39.986 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.245 [2024-07-26 20:27:28.558082] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:06:40.245 [2024-07-26 20:27:28.558138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid920064 ] 00:06:40.245 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.245 [2024-07-26 20:27:28.640659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.245 [2024-07-26 20:27:28.679109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.245 [2024-07-26 20:27:28.679196] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:06:40.245 [2024-07-26 20:27:28.679208] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:40.245 [2024-07-26 20:27:28.679216] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.245 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:40.245 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.245 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:40.245 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:40.245 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:40.245 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.245 20:27:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:40.245 20:27:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 919935 00:06:40.245 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 919935 ']' 00:06:40.245 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 919935 00:06:40.245 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:40.245 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.245 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 919935 00:06:40.504 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.504 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.504 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 919935' 00:06:40.504 killing process with pid 919935 00:06:40.504 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 919935 00:06:40.505 20:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 919935 00:06:40.764 00:06:40.764 real 0m1.433s 00:06:40.764 user 0m1.564s 00:06:40.764 sys 0m0.477s 00:06:40.764 20:27:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.764 20:27:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:40.764 ************************************ 00:06:40.764 END TEST exit_on_failed_rpc_init 00:06:40.764 ************************************ 00:06:40.764 20:27:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:40.764 00:06:40.764 real 0m14.048s 00:06:40.764 user 0m13.344s 00:06:40.764 sys 0m1.836s 00:06:40.764 20:27:29 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.764 20:27:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.764 ************************************ 00:06:40.764 END TEST skip_rpc 00:06:40.764 ************************************ 00:06:40.764 20:27:29 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:40.764 20:27:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.764 20:27:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.764 20:27:29 -- 
common/autotest_common.sh@10 -- # set +x 00:06:40.764 ************************************ 00:06:40.764 START TEST rpc_client 00:06:40.764 ************************************ 00:06:40.764 20:27:29 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:40.764 * Looking for test storage... 00:06:40.764 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:06:40.764 20:27:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:41.024 OK 00:06:41.024 20:27:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:41.024 00:06:41.024 real 0m0.113s 00:06:41.024 user 0m0.046s 00:06:41.024 sys 0m0.076s 00:06:41.024 20:27:29 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.024 20:27:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:41.024 ************************************ 00:06:41.024 END TEST rpc_client 00:06:41.024 ************************************ 00:06:41.024 20:27:29 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:41.024 20:27:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.024 20:27:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.024 20:27:29 -- common/autotest_common.sh@10 -- # set +x 00:06:41.024 ************************************ 00:06:41.024 START TEST json_config 00:06:41.024 ************************************ 00:06:41.024 20:27:29 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:41.024 20:27:29 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.024 20:27:29 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.024 20:27:29 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.024 20:27:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.024 20:27:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.024 20:27:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.024 20:27:29 json_config -- paths/export.sh@5 -- # export PATH 00:06:41.024 20:27:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@47 -- # : 0 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:41.024 20:27:29 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:41.024 20:27:29 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:41.024 INFO: JSON configuration test init 00:06:41.025 20:27:29 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:41.025 20:27:29 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:41.025 20:27:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.025 20:27:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.025 20:27:29 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:41.025 20:27:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.025 20:27:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.025 20:27:29 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:41.025 20:27:29 json_config -- json_config/common.sh@9 -- # local app=target 00:06:41.025 20:27:29 json_config -- json_config/common.sh@10 -- # shift 00:06:41.025 20:27:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:41.025 20:27:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:41.025 20:27:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:41.025 20:27:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.025 20:27:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.025 20:27:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=920433 00:06:41.025 20:27:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:41.025 Waiting for target to run... 
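The json_config target is parked with --wait-for-rpc on its own socket (-r /var/tmp/spdk_tgt.sock), so it idles in init mode until the harness drives it over RPC. The start-up amounts to the sketch below; spdk_tgt_config.json is the target config file registered in configs_path above, and load_config replays the RPC calls saved in it:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # once the socket answers, replay the saved configuration into the waiting target
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < spdk_tgt_config.json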
00:06:41.025 20:27:29 json_config -- json_config/common.sh@25 -- # waitforlisten 920433 /var/tmp/spdk_tgt.sock 00:06:41.025 20:27:29 json_config -- common/autotest_common.sh@831 -- # '[' -z 920433 ']' 00:06:41.025 20:27:29 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:41.025 20:27:29 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.025 20:27:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:41.025 20:27:29 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:41.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:41.025 20:27:29 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.025 20:27:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.025 [2024-07-26 20:27:29.575431] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:06:41.025 [2024-07-26 20:27:29.575487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid920433 ] 00:06:41.284 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.543 [2024-07-26 20:27:29.877714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.543 [2024-07-26 20:27:29.899080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.111 20:27:30 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.111 20:27:30 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:42.111 20:27:30 json_config -- json_config/common.sh@26 -- # echo '' 00:06:42.111 00:06:42.111 20:27:30 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:42.111 20:27:30 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:42.111 20:27:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.111 20:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.111 20:27:30 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:42.111 20:27:30 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:42.111 20:27:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.112 20:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.112 20:27:30 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:42.112 20:27:30 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:42.112 20:27:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:45.400 20:27:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:45.400 20:27:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.400 20:27:33 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:45.400 20:27:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@51 -- # sort 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:45.400 20:27:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:45.400 20:27:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:45.400 20:27:33 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:45.401 20:27:33 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:45.401 20:27:33 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:45.401 20:27:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:45.401 20:27:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.401 20:27:33 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:45.401 20:27:33 json_config -- json_config/json_config.sh@237 -- # [[ rdma == \r\d\m\a ]] 00:06:45.401 20:27:33 json_config -- json_config/json_config.sh@238 -- # TEST_TRANSPORT=rdma 00:06:45.401 20:27:33 json_config -- json_config/json_config.sh@238 -- # nvmftestinit 00:06:45.401 20:27:33 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:45.401 20:27:33 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.401 20:27:33 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:45.401 20:27:33 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:45.401 20:27:33 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:45.401 20:27:33 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.401 20:27:33 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:06:45.401 
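The notification-type check above is a small shell set difference: merge the expected and reported type lists, one token per line, then keep tokens that appear only once; an empty result means the lists match exactly. In isolation (expected list and socket taken from this run):

    expected="bdev_register bdev_unregister"
    reported=$(./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]')
    # uniq -u drops every token that appears in both lists
    type_diff=$(echo "$expected $reported" | tr ' ' '\n' | sort | uniq -u)
    [[ -z "$type_diff" ]] || { echo "unexpected notification type diff: $type_diff" >&2; exit 1; }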
20:27:33 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.401 20:27:33 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:06:45.401 20:27:33 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:45.401 20:27:33 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:06:45.401 20:27:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@296 -- # e810=() 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@297 -- # x722=() 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@298 -- # mlx=() 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:53.525 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:53.525 
20:27:41 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:53.525 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.525 20:27:41 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:53.526 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:53.526 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@58 -- # uname 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@58 -- # 
'[' Linux '!=' Linux ']' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:53.526 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:53.526 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:53.526 altname enp217s0f0np0 00:06:53.526 altname ens818f0np0 00:06:53.526 inet 192.168.100.8/24 scope global mlx_0_0 00:06:53.526 valid_lft forever preferred_lft forever 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:53.526 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:53.526 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:53.526 altname enp217s0f1np1 00:06:53.526 altname ens818f1np1 00:06:53.526 inet 192.168.100.9/24 scope global mlx_0_1 00:06:53.526 valid_lft forever preferred_lft forever 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@422 -- # return 0 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:53.526 20:27:41 json_config -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:06:53.526 192.168.100.9' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:53.526 192.168.100.9' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@457 -- # head -n 1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:53.526 192.168.100.9' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@458 -- # head -n 1 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:53.526 20:27:41 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:53.526 20:27:41 json_config -- json_config/json_config.sh@241 -- # [[ -z 192.168.100.8 ]] 00:06:53.526 20:27:41 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:53.526 20:27:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:53.785 MallocForNvmf0 00:06:53.785 20:27:42 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:53.786 20:27:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:53.786 MallocForNvmf1 00:06:53.786 20:27:42 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:53.786 20:27:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:54.045 [2024-07-26 20:27:42.441362] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:54.045 [2024-07-26 20:27:42.469711] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10d75d0/0x1104640) succeed. 00:06:54.045 [2024-07-26 20:27:42.480960] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10d97c0/0x1164600) succeed. 
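For anyone replaying this step outside the harness: get_ip_address (nvmf/common.sh, traced above) is just an ip/awk/cut pipeline, and the first and second target IPs fall out of a head/tail split of RDMA_IP_LIST. A minimal sketch, assuming the same two interface names this run discovered (mlx_0_0, mlx_0_1):

get_ip_address() {
    local interface=$1
    # "ip -o" prints one address per line; field 4 is the CIDR, e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9 in this run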
00:06:54.045 20:27:42 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:54.045 20:27:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:54.303 20:27:42 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:54.303 20:27:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:54.303 20:27:42 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:54.561 20:27:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:54.561 20:27:43 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:54.561 20:27:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:54.833 [2024-07-26 20:27:43.180905] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:54.833 20:27:43 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:54.833 20:27:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:54.833 20:27:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.833 20:27:43 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:54.833 20:27:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:54.833 20:27:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.833 20:27:43 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:54.833 20:27:43 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:54.833 20:27:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:55.098 MallocBdevForConfigChangeCheck 00:06:55.098 20:27:43 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:55.098 20:27:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:55.098 20:27:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.098 20:27:43 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:55.098 20:27:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:55.358 20:27:43 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:55.358 INFO: shutting down applications... 
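Condensed, the create_nvmf_subsystem_config phase traced above reduces to seven rpc.py calls against the target's UNIX socket. The sketch below just replays them; every method name and argument is taken from the trace, and $rpc/$sock are shorthand introduced here:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock

$rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0   # 8 MiB bdev, 512 B blocks
$rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1  # 4 MiB bdev, 1 KiB blocks
$rpc -s $sock nvmf_create_transport -t rdma -u 8192 -c 0       # -c 0 gets clamped to 256, per the warning above
$rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420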
00:06:55.358 20:27:43 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:55.358 20:27:43 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:55.358 20:27:43 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:55.358 20:27:43 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:57.958 Calling clear_iscsi_subsystem 00:06:57.958 Calling clear_nvmf_subsystem 00:06:57.958 Calling clear_nbd_subsystem 00:06:57.958 Calling clear_ublk_subsystem 00:06:57.958 Calling clear_vhost_blk_subsystem 00:06:57.958 Calling clear_vhost_scsi_subsystem 00:06:57.958 Calling clear_bdev_subsystem 00:06:57.958 20:27:46 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:57.958 20:27:46 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:57.958 20:27:46 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:57.958 20:27:46 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:57.958 20:27:46 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:57.958 20:27:46 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:58.217 20:27:46 json_config -- json_config/json_config.sh@349 -- # break 00:06:58.217 20:27:46 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:58.217 20:27:46 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:58.217 20:27:46 json_config -- json_config/common.sh@31 -- # local app=target 00:06:58.217 20:27:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:58.217 20:27:46 json_config -- json_config/common.sh@35 -- # [[ -n 920433 ]] 00:06:58.217 20:27:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 920433 00:06:58.217 20:27:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:58.217 20:27:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:58.217 20:27:46 json_config -- json_config/common.sh@41 -- # kill -0 920433 00:06:58.217 20:27:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:58.785 20:27:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:58.785 20:27:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:58.785 20:27:47 json_config -- json_config/common.sh@41 -- # kill -0 920433 00:06:58.785 20:27:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:58.785 20:27:47 json_config -- json_config/common.sh@43 -- # break 00:06:58.785 20:27:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:58.785 20:27:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:58.785 SPDK target shutdown done 00:06:58.785 20:27:47 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:58.785 INFO: relaunching applications... 
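The shutdown traced above (json_config/common.sh) is SIGINT plus a bounded liveness poll, not a blind kill -9; `kill -0` probes whether the process still exists without delivering a signal. A sketch of the same loop, using this run's target PID:

app_pid=920433                        # spdk_tgt instance from this run
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do      # up to 30 * 0.5 s = 15 s of grace
    kill -0 "$app_pid" 2>/dev/null || break
    sleep 0.5
done
echo 'SPDK target shutdown done'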
00:06:58.785 20:27:47 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:58.785 20:27:47 json_config -- json_config/common.sh@9 -- # local app=target 00:06:58.785 20:27:47 json_config -- json_config/common.sh@10 -- # shift 00:06:58.785 20:27:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:58.785 20:27:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:58.785 20:27:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:58.785 20:27:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:58.785 20:27:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:58.785 20:27:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=926018 00:06:58.785 20:27:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:58.785 Waiting for target to run... 00:06:58.785 20:27:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:58.785 20:27:47 json_config -- json_config/common.sh@25 -- # waitforlisten 926018 /var/tmp/spdk_tgt.sock 00:06:58.785 20:27:47 json_config -- common/autotest_common.sh@831 -- # '[' -z 926018 ']' 00:06:58.785 20:27:47 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:58.785 20:27:47 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.785 20:27:47 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:58.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:58.785 20:27:47 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.785 20:27:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.785 [2024-07-26 20:27:47.181364] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:06:58.785 [2024-07-26 20:27:47.181421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid926018 ] 00:06:58.785 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.354 [2024-07-26 20:27:47.628726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.354 [2024-07-26 20:27:47.656460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.640 [2024-07-26 20:27:50.702191] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe2b390/0xe380c0) succeed. 00:07:02.640 [2024-07-26 20:27:50.713235] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe2d580/0xeb8100) succeed. 
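Relaunching feeds the saved snapshot back with --json and then blocks in waitforlisten. The helper's internals are not shown in this trace, so the polling loop below is a naive stand-in (an assumption, not the real implementation); the spdk_tgt arguments are the ones actually used above:

spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
$spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json &
app_pid=$!

for (( i = 0; i < 100; i++ )); do                # max_retries=100, as in the trace
    [[ -S /var/tmp/spdk_tgt.sock ]] && break     # crude: wait for the RPC socket node to appear
    sleep 0.1
done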
00:07:02.640 [2024-07-26 20:27:50.762565] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:02.899 20:27:51 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.899 20:27:51 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:02.899 20:27:51 json_config -- json_config/common.sh@26 -- # echo '' 00:07:02.899 00:07:02.899 20:27:51 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:02.899 20:27:51 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:02.899 INFO: Checking if target configuration is the same... 00:07:02.899 20:27:51 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:02.899 20:27:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:02.899 20:27:51 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:02.899 + '[' 2 -ne 2 ']' 00:07:02.899 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:02.899 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:07:02.899 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:02.899 +++ basename /dev/fd/62 00:07:02.899 ++ mktemp /tmp/62.XXX 00:07:02.899 + tmp_file_1=/tmp/62.YZV 00:07:02.899 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:02.899 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:02.899 + tmp_file_2=/tmp/spdk_tgt_config.json.Fby 00:07:02.899 + ret=0 00:07:02.899 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:03.158 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:03.159 + diff -u /tmp/62.YZV /tmp/spdk_tgt_config.json.Fby 00:07:03.159 + echo 'INFO: JSON config files are the same' 00:07:03.159 INFO: JSON config files are the same 00:07:03.159 + rm /tmp/62.YZV /tmp/spdk_tgt_config.json.Fby 00:07:03.159 + exit 0 00:07:03.159 20:27:51 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:03.159 20:27:51 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:03.159 INFO: changing configuration and checking if this can be detected... 
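json_diff.sh, traced above, compares a live save_config dump (handed in as /dev/fd/62) against the on-disk snapshot, but only after pushing both through config_filter.py -method sort, so key and array ordering cannot cause spurious mismatches. A condensed sketch, assuming config_filter.py reads the config on stdin (its argument-free invocation above suggests as much):

rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc=$rootdir/scripts/rpc.py
filter=$rootdir/test/json_config/config_filter.py

tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
$rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$tmp_file_1"
$filter -method sort < "$rootdir/spdk_tgt_config.json" > "$tmp_file_2"

if diff -u "$tmp_file_1" "$tmp_file_2"; then
    echo 'INFO: JSON config files are the same'
fi
rm "$tmp_file_1" "$tmp_file_2"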
00:07:03.159 20:27:51 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:03.159 20:27:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:03.418 20:27:51 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:03.418 20:27:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:03.418 20:27:51 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:03.418 + '[' 2 -ne 2 ']' 00:07:03.418 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:03.418 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:07:03.418 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:03.418 +++ basename /dev/fd/62 00:07:03.418 ++ mktemp /tmp/62.XXX 00:07:03.418 + tmp_file_1=/tmp/62.ftE 00:07:03.418 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:03.418 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:03.418 + tmp_file_2=/tmp/spdk_tgt_config.json.rHg 00:07:03.418 + ret=0 00:07:03.418 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:03.676 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:03.677 + diff -u /tmp/62.ftE /tmp/spdk_tgt_config.json.rHg 00:07:03.677 + ret=1 00:07:03.677 + echo '=== Start of file: /tmp/62.ftE ===' 00:07:03.677 + cat /tmp/62.ftE 00:07:03.677 + echo '=== End of file: /tmp/62.ftE ===' 00:07:03.677 + echo '' 00:07:03.677 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rHg ===' 00:07:03.677 + cat /tmp/spdk_tgt_config.json.rHg 00:07:03.677 + echo '=== End of file: /tmp/spdk_tgt_config.json.rHg ===' 00:07:03.677 + echo '' 00:07:03.677 + rm /tmp/62.ftE /tmp/spdk_tgt_config.json.rHg 00:07:03.677 + exit 1 00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:07:03.677 INFO: configuration change detected. 
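The "change" being detected is deliberately injected: MallocBdevForConfigChangeCheck, created during setup purely as a sentinel, is deleted, so the next save_config no longer matches the snapshot and the same diff now exits non-zero. A sketch of that flow; the delete RPC is verbatim from the trace, while the process substitutions are a compaction of json_diff.sh's temp-file dance:

rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc=$rootdir/scripts/rpc.py
filter=$rootdir/test/json_config/config_filter.py

$rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
if ! diff -u <($rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort) \
             <($filter -method sort < "$rootdir/spdk_tgt_config.json"); then
    echo 'INFO: configuration change detected.'
fi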
00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:07:03.677 20:27:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:03.677 20:27:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@321 -- # [[ -n 926018 ]] 00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:07:03.677 20:27:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:03.677 20:27:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@197 -- # uname -s 00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:07:03.677 20:27:52 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:07:03.677 20:27:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:03.677 20:27:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.936 20:27:52 json_config -- json_config/json_config.sh@327 -- # killprocess 926018 00:07:03.936 20:27:52 json_config -- common/autotest_common.sh@950 -- # '[' -z 926018 ']' 00:07:03.936 20:27:52 json_config -- common/autotest_common.sh@954 -- # kill -0 926018 00:07:03.936 20:27:52 json_config -- common/autotest_common.sh@955 -- # uname 00:07:03.936 20:27:52 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.936 20:27:52 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 926018 00:07:03.936 20:27:52 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.936 20:27:52 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.936 20:27:52 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 926018' 00:07:03.936 killing process with pid 926018 00:07:03.936 20:27:52 json_config -- common/autotest_common.sh@969 -- # kill 926018 00:07:03.936 20:27:52 json_config -- common/autotest_common.sh@974 -- # wait 926018 00:07:06.469 20:27:54 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:06.469 20:27:54 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:07:06.469 20:27:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:06.469 20:27:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 20:27:54 json_config -- json_config/json_config.sh@332 -- # return 0 00:07:06.469 20:27:54 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:07:06.469 INFO: Success 00:07:06.469 20:27:54 json_config -- json_config/json_config.sh@1 -- 
# nvmftestfini 00:07:06.469 20:27:54 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:06.469 20:27:54 json_config -- nvmf/common.sh@117 -- # sync 00:07:06.469 20:27:54 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:07:06.469 20:27:54 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:07:06.469 20:27:54 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:06.469 20:27:54 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:06.469 20:27:54 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:07:06.469 00:07:06.469 real 0m25.458s 00:07:06.469 user 0m28.066s 00:07:06.469 sys 0m8.522s 00:07:06.469 20:27:54 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.469 20:27:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 ************************************ 00:07:06.469 END TEST json_config 00:07:06.469 ************************************ 00:07:06.469 20:27:54 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:06.469 20:27:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.469 20:27:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.469 20:27:54 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 ************************************ 00:07:06.469 START TEST json_config_extra_key 00:07:06.469 ************************************ 00:07:06.469 20:27:54 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:06.729 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.729 20:27:55 
json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:06.729 20:27:55 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.729 20:27:55 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.729 20:27:55 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.729 20:27:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.729 20:27:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.729 20:27:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.729 20:27:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:06.729 20:27:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:06.729 20:27:55 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:06.729 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:07:06.729 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:06.729 20:27:55 json_config_extra_key -- 
json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:06.729 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:06.729 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:06.729 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:06.729 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:06.729 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:06.729 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:06.729 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:06.729 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:06.729 INFO: launching applications... 00:07:06.729 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:07:06.729 20:27:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:06.729 20:27:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:06.729 20:27:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:06.729 20:27:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:06.729 20:27:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:06.729 20:27:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:06.729 20:27:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:06.729 20:27:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=927541 00:07:06.729 20:27:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:06.729 Waiting for target to run... 00:07:06.729 20:27:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 927541 /var/tmp/spdk_tgt.sock 00:07:06.729 20:27:55 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:07:06.730 20:27:55 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 927541 ']' 00:07:06.730 20:27:55 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:06.730 20:27:55 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.730 20:27:55 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:06.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
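json_config/common.sh keys every piece of per-app state (pid, socket, base parameters, config file) by an app name through bash associative arrays, which is why the same start/shutdown helpers serve this extra_key test and the main json_config test alike. A minimal sketch of the pattern; start_app is a simplified stand-in for json_config_test_start_app, not its real body:

declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json')

start_app() {
    local app=$1
    # ${app_params[$app]} is left unquoted on purpose so the flags word-split
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
        ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
}

start_app target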
00:07:06.730 20:27:55 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.730 20:27:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:06.730 [2024-07-26 20:27:55.106758] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:07:06.730 [2024-07-26 20:27:55.106815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927541 ] 00:07:06.730 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.296 [2024-07-26 20:27:55.558382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.296 [2024-07-26 20:27:55.589201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.555 20:27:55 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.555 20:27:55 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:07.555 20:27:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:07.555 00:07:07.555 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:07.555 INFO: shutting down applications... 00:07:07.555 20:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:07.555 20:27:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:07.555 20:27:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:07.555 20:27:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 927541 ]] 00:07:07.555 20:27:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 927541 00:07:07.555 20:27:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:07.555 20:27:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:07.555 20:27:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 927541 00:07:07.555 20:27:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:08.124 20:27:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:08.124 20:27:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:08.124 20:27:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 927541 00:07:08.124 20:27:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:08.124 20:27:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:08.124 20:27:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:08.124 20:27:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:08.124 SPDK target shutdown done 00:07:08.124 20:27:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:08.124 Success 00:07:08.124 00:07:08.124 real 0m1.470s 00:07:08.124 user 0m1.018s 00:07:08.124 sys 0m0.590s 00:07:08.124 20:27:56 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.124 20:27:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:08.124 ************************************ 00:07:08.124 END TEST json_config_extra_key 00:07:08.124 ************************************ 00:07:08.124 20:27:56 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:08.124 20:27:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.124 20:27:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.124 20:27:56 -- common/autotest_common.sh@10 -- # set +x 00:07:08.124 ************************************ 00:07:08.124 START TEST alias_rpc 00:07:08.124 ************************************ 00:07:08.124 20:27:56 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:08.124 * Looking for test storage... 00:07:08.124 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:07:08.124 20:27:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:08.124 20:27:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=927851 00:07:08.124 20:27:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 927851 00:07:08.124 20:27:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:08.124 20:27:56 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 927851 ']' 00:07:08.124 20:27:56 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.124 20:27:56 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.124 20:27:56 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.124 20:27:56 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.124 20:27:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.124 [2024-07-26 20:27:56.655020] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
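alias_rpc.sh arms an ERR trap before anything can fail, so even an aborted assertion still reaps the spdk_tgt it launched (PID 927851 in this run). A sketch of the pattern, with the test body elided:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
trap 'killprocess $spdk_tgt_pid; exit 1' ERR   # failure path: reap, then fail the test

# ... exercise the aliased RPCs against /var/tmp/spdk.sock ...

killprocess $spdk_tgt_pid                      # normal-path cleanup
trap - ERR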
00:07:08.124 [2024-07-26 20:27:56.655081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927851 ] 00:07:08.384 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.384 [2024-07-26 20:27:56.743260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.384 [2024-07-26 20:27:56.782206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.950 20:27:57 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.951 20:27:57 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:08.951 20:27:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:09.209 20:27:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 927851 00:07:09.209 20:27:57 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 927851 ']' 00:07:09.209 20:27:57 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 927851 00:07:09.209 20:27:57 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:09.209 20:27:57 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.209 20:27:57 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 927851 00:07:09.209 20:27:57 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.209 20:27:57 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.209 20:27:57 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 927851' 00:07:09.209 killing process with pid 927851 00:07:09.209 20:27:57 alias_rpc -- common/autotest_common.sh@969 -- # kill 927851 00:07:09.209 20:27:57 alias_rpc -- common/autotest_common.sh@974 -- # wait 927851 00:07:09.467 00:07:09.467 real 0m1.494s 00:07:09.467 user 0m1.545s 00:07:09.467 sys 0m0.489s 00:07:09.467 20:27:57 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.467 20:27:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.467 ************************************ 00:07:09.467 END TEST alias_rpc 00:07:09.467 ************************************ 00:07:09.726 20:27:58 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:09.726 20:27:58 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:09.726 20:27:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.726 20:27:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.726 20:27:58 -- common/autotest_common.sh@10 -- # set +x 00:07:09.726 ************************************ 00:07:09.726 START TEST spdkcli_tcp 00:07:09.726 ************************************ 00:07:09.726 20:27:58 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:09.726 * Looking for test storage... 
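killprocess itself, pieced together from the trace just above: it checks that the PID is non-empty and alive, inspects the command name (reactor_0 here) to special-case sudo wrappers, then kills and waits so the child is reaped. The sudo branch is not exercised in this run, so treating it as a bail-out below is an assumption:

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                  # still running?
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [[ $process_name != sudo ]] || return 1     # assumption: skip sudo wrappers entirely
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                         # reap; a non-zero exit is expected on signal
}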
00:07:09.726 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:07:09.726 20:27:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:07:09.726 20:27:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:09.726 20:27:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:07:09.726 20:27:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:09.726 20:27:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:09.726 20:27:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:09.726 20:27:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:09.726 20:27:58 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.726 20:27:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:09.726 20:27:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=928199 00:07:09.726 20:27:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 928199 00:07:09.726 20:27:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:09.726 20:27:58 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 928199 ']' 00:07:09.726 20:27:58 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.726 20:27:58 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.726 20:27:58 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.726 20:27:58 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.726 20:27:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:09.726 [2024-07-26 20:27:58.236415] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
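spdk_tgt only ever listens on a UNIX-domain socket; the TCP leg this test exercises (set up in the trace just below) comes from socat bridging a local TCP port to that socket, with rpc.py pointed at the TCP side. The -r 100 connection retries paper over the startup race with socat. A sketch, with the flag values as invoked in the trace:

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# -r: connection retries, -t: per-call timeout in seconds
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

# without "fork", socat serves a single connection and may already have exited
kill "$socat_pid" 2>/dev/null || true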
00:07:09.726 [2024-07-26 20:27:58.236470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928199 ] 00:07:09.726 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.986 [2024-07-26 20:27:58.322074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.986 [2024-07-26 20:27:58.363161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.986 [2024-07-26 20:27:58.363165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.554 20:27:59 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.554 20:27:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:10.554 20:27:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=928367 00:07:10.554 20:27:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:10.554 20:27:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:10.814 [ 00:07:10.814 "bdev_malloc_delete", 00:07:10.814 "bdev_malloc_create", 00:07:10.814 "bdev_null_resize", 00:07:10.814 "bdev_null_delete", 00:07:10.814 "bdev_null_create", 00:07:10.814 "bdev_nvme_cuse_unregister", 00:07:10.814 "bdev_nvme_cuse_register", 00:07:10.814 "bdev_opal_new_user", 00:07:10.814 "bdev_opal_set_lock_state", 00:07:10.814 "bdev_opal_delete", 00:07:10.814 "bdev_opal_get_info", 00:07:10.814 "bdev_opal_create", 00:07:10.814 "bdev_nvme_opal_revert", 00:07:10.814 "bdev_nvme_opal_init", 00:07:10.814 "bdev_nvme_send_cmd", 00:07:10.814 "bdev_nvme_get_path_iostat", 00:07:10.814 "bdev_nvme_get_mdns_discovery_info", 00:07:10.814 "bdev_nvme_stop_mdns_discovery", 00:07:10.814 "bdev_nvme_start_mdns_discovery", 00:07:10.814 "bdev_nvme_set_multipath_policy", 00:07:10.814 "bdev_nvme_set_preferred_path", 00:07:10.814 "bdev_nvme_get_io_paths", 00:07:10.814 "bdev_nvme_remove_error_injection", 00:07:10.814 "bdev_nvme_add_error_injection", 00:07:10.814 "bdev_nvme_get_discovery_info", 00:07:10.814 "bdev_nvme_stop_discovery", 00:07:10.814 "bdev_nvme_start_discovery", 00:07:10.814 "bdev_nvme_get_controller_health_info", 00:07:10.814 "bdev_nvme_disable_controller", 00:07:10.814 "bdev_nvme_enable_controller", 00:07:10.814 "bdev_nvme_reset_controller", 00:07:10.814 "bdev_nvme_get_transport_statistics", 00:07:10.814 "bdev_nvme_apply_firmware", 00:07:10.814 "bdev_nvme_detach_controller", 00:07:10.814 "bdev_nvme_get_controllers", 00:07:10.814 "bdev_nvme_attach_controller", 00:07:10.814 "bdev_nvme_set_hotplug", 00:07:10.814 "bdev_nvme_set_options", 00:07:10.814 "bdev_passthru_delete", 00:07:10.814 "bdev_passthru_create", 00:07:10.814 "bdev_lvol_set_parent_bdev", 00:07:10.814 "bdev_lvol_set_parent", 00:07:10.814 "bdev_lvol_check_shallow_copy", 00:07:10.814 "bdev_lvol_start_shallow_copy", 00:07:10.814 "bdev_lvol_grow_lvstore", 00:07:10.814 "bdev_lvol_get_lvols", 00:07:10.814 "bdev_lvol_get_lvstores", 00:07:10.814 "bdev_lvol_delete", 00:07:10.814 "bdev_lvol_set_read_only", 00:07:10.814 "bdev_lvol_resize", 00:07:10.814 "bdev_lvol_decouple_parent", 00:07:10.814 "bdev_lvol_inflate", 00:07:10.814 "bdev_lvol_rename", 00:07:10.814 "bdev_lvol_clone_bdev", 00:07:10.814 "bdev_lvol_clone", 00:07:10.814 "bdev_lvol_snapshot", 00:07:10.814 "bdev_lvol_create", 00:07:10.814 "bdev_lvol_delete_lvstore", 00:07:10.814 
"bdev_lvol_rename_lvstore", 00:07:10.814 "bdev_lvol_create_lvstore", 00:07:10.814 "bdev_raid_set_options", 00:07:10.814 "bdev_raid_remove_base_bdev", 00:07:10.814 "bdev_raid_add_base_bdev", 00:07:10.814 "bdev_raid_delete", 00:07:10.814 "bdev_raid_create", 00:07:10.814 "bdev_raid_get_bdevs", 00:07:10.814 "bdev_error_inject_error", 00:07:10.814 "bdev_error_delete", 00:07:10.814 "bdev_error_create", 00:07:10.814 "bdev_split_delete", 00:07:10.814 "bdev_split_create", 00:07:10.814 "bdev_delay_delete", 00:07:10.814 "bdev_delay_create", 00:07:10.814 "bdev_delay_update_latency", 00:07:10.814 "bdev_zone_block_delete", 00:07:10.814 "bdev_zone_block_create", 00:07:10.814 "blobfs_create", 00:07:10.814 "blobfs_detect", 00:07:10.814 "blobfs_set_cache_size", 00:07:10.814 "bdev_aio_delete", 00:07:10.814 "bdev_aio_rescan", 00:07:10.814 "bdev_aio_create", 00:07:10.814 "bdev_ftl_set_property", 00:07:10.814 "bdev_ftl_get_properties", 00:07:10.814 "bdev_ftl_get_stats", 00:07:10.814 "bdev_ftl_unmap", 00:07:10.814 "bdev_ftl_unload", 00:07:10.814 "bdev_ftl_delete", 00:07:10.814 "bdev_ftl_load", 00:07:10.814 "bdev_ftl_create", 00:07:10.814 "bdev_virtio_attach_controller", 00:07:10.814 "bdev_virtio_scsi_get_devices", 00:07:10.814 "bdev_virtio_detach_controller", 00:07:10.814 "bdev_virtio_blk_set_hotplug", 00:07:10.814 "bdev_iscsi_delete", 00:07:10.814 "bdev_iscsi_create", 00:07:10.814 "bdev_iscsi_set_options", 00:07:10.814 "accel_error_inject_error", 00:07:10.814 "ioat_scan_accel_module", 00:07:10.814 "dsa_scan_accel_module", 00:07:10.814 "iaa_scan_accel_module", 00:07:10.814 "keyring_file_remove_key", 00:07:10.814 "keyring_file_add_key", 00:07:10.814 "keyring_linux_set_options", 00:07:10.814 "iscsi_get_histogram", 00:07:10.814 "iscsi_enable_histogram", 00:07:10.814 "iscsi_set_options", 00:07:10.814 "iscsi_get_auth_groups", 00:07:10.814 "iscsi_auth_group_remove_secret", 00:07:10.814 "iscsi_auth_group_add_secret", 00:07:10.814 "iscsi_delete_auth_group", 00:07:10.814 "iscsi_create_auth_group", 00:07:10.814 "iscsi_set_discovery_auth", 00:07:10.814 "iscsi_get_options", 00:07:10.814 "iscsi_target_node_request_logout", 00:07:10.814 "iscsi_target_node_set_redirect", 00:07:10.814 "iscsi_target_node_set_auth", 00:07:10.814 "iscsi_target_node_add_lun", 00:07:10.814 "iscsi_get_stats", 00:07:10.814 "iscsi_get_connections", 00:07:10.814 "iscsi_portal_group_set_auth", 00:07:10.815 "iscsi_start_portal_group", 00:07:10.815 "iscsi_delete_portal_group", 00:07:10.815 "iscsi_create_portal_group", 00:07:10.815 "iscsi_get_portal_groups", 00:07:10.815 "iscsi_delete_target_node", 00:07:10.815 "iscsi_target_node_remove_pg_ig_maps", 00:07:10.815 "iscsi_target_node_add_pg_ig_maps", 00:07:10.815 "iscsi_create_target_node", 00:07:10.815 "iscsi_get_target_nodes", 00:07:10.815 "iscsi_delete_initiator_group", 00:07:10.815 "iscsi_initiator_group_remove_initiators", 00:07:10.815 "iscsi_initiator_group_add_initiators", 00:07:10.815 "iscsi_create_initiator_group", 00:07:10.815 "iscsi_get_initiator_groups", 00:07:10.815 "nvmf_set_crdt", 00:07:10.815 "nvmf_set_config", 00:07:10.815 "nvmf_set_max_subsystems", 00:07:10.815 "nvmf_stop_mdns_prr", 00:07:10.815 "nvmf_publish_mdns_prr", 00:07:10.815 "nvmf_subsystem_get_listeners", 00:07:10.815 "nvmf_subsystem_get_qpairs", 00:07:10.815 "nvmf_subsystem_get_controllers", 00:07:10.815 "nvmf_get_stats", 00:07:10.815 "nvmf_get_transports", 00:07:10.815 "nvmf_create_transport", 00:07:10.815 "nvmf_get_targets", 00:07:10.815 "nvmf_delete_target", 00:07:10.815 "nvmf_create_target", 00:07:10.815 
"nvmf_subsystem_allow_any_host", 00:07:10.815 "nvmf_subsystem_remove_host", 00:07:10.815 "nvmf_subsystem_add_host", 00:07:10.815 "nvmf_ns_remove_host", 00:07:10.815 "nvmf_ns_add_host", 00:07:10.815 "nvmf_subsystem_remove_ns", 00:07:10.815 "nvmf_subsystem_add_ns", 00:07:10.815 "nvmf_subsystem_listener_set_ana_state", 00:07:10.815 "nvmf_discovery_get_referrals", 00:07:10.815 "nvmf_discovery_remove_referral", 00:07:10.815 "nvmf_discovery_add_referral", 00:07:10.815 "nvmf_subsystem_remove_listener", 00:07:10.815 "nvmf_subsystem_add_listener", 00:07:10.815 "nvmf_delete_subsystem", 00:07:10.815 "nvmf_create_subsystem", 00:07:10.815 "nvmf_get_subsystems", 00:07:10.815 "env_dpdk_get_mem_stats", 00:07:10.815 "nbd_get_disks", 00:07:10.815 "nbd_stop_disk", 00:07:10.815 "nbd_start_disk", 00:07:10.815 "ublk_recover_disk", 00:07:10.815 "ublk_get_disks", 00:07:10.815 "ublk_stop_disk", 00:07:10.815 "ublk_start_disk", 00:07:10.815 "ublk_destroy_target", 00:07:10.815 "ublk_create_target", 00:07:10.815 "virtio_blk_create_transport", 00:07:10.815 "virtio_blk_get_transports", 00:07:10.815 "vhost_controller_set_coalescing", 00:07:10.815 "vhost_get_controllers", 00:07:10.815 "vhost_delete_controller", 00:07:10.815 "vhost_create_blk_controller", 00:07:10.815 "vhost_scsi_controller_remove_target", 00:07:10.815 "vhost_scsi_controller_add_target", 00:07:10.815 "vhost_start_scsi_controller", 00:07:10.815 "vhost_create_scsi_controller", 00:07:10.815 "thread_set_cpumask", 00:07:10.815 "framework_get_governor", 00:07:10.815 "framework_get_scheduler", 00:07:10.815 "framework_set_scheduler", 00:07:10.815 "framework_get_reactors", 00:07:10.815 "thread_get_io_channels", 00:07:10.815 "thread_get_pollers", 00:07:10.815 "thread_get_stats", 00:07:10.815 "framework_monitor_context_switch", 00:07:10.815 "spdk_kill_instance", 00:07:10.815 "log_enable_timestamps", 00:07:10.815 "log_get_flags", 00:07:10.815 "log_clear_flag", 00:07:10.815 "log_set_flag", 00:07:10.815 "log_get_level", 00:07:10.815 "log_set_level", 00:07:10.815 "log_get_print_level", 00:07:10.815 "log_set_print_level", 00:07:10.815 "framework_enable_cpumask_locks", 00:07:10.815 "framework_disable_cpumask_locks", 00:07:10.815 "framework_wait_init", 00:07:10.815 "framework_start_init", 00:07:10.815 "scsi_get_devices", 00:07:10.815 "bdev_get_histogram", 00:07:10.815 "bdev_enable_histogram", 00:07:10.815 "bdev_set_qos_limit", 00:07:10.815 "bdev_set_qd_sampling_period", 00:07:10.815 "bdev_get_bdevs", 00:07:10.815 "bdev_reset_iostat", 00:07:10.815 "bdev_get_iostat", 00:07:10.815 "bdev_examine", 00:07:10.815 "bdev_wait_for_examine", 00:07:10.815 "bdev_set_options", 00:07:10.815 "notify_get_notifications", 00:07:10.815 "notify_get_types", 00:07:10.815 "accel_get_stats", 00:07:10.815 "accel_set_options", 00:07:10.815 "accel_set_driver", 00:07:10.815 "accel_crypto_key_destroy", 00:07:10.815 "accel_crypto_keys_get", 00:07:10.815 "accel_crypto_key_create", 00:07:10.815 "accel_assign_opc", 00:07:10.815 "accel_get_module_info", 00:07:10.815 "accel_get_opc_assignments", 00:07:10.815 "vmd_rescan", 00:07:10.815 "vmd_remove_device", 00:07:10.815 "vmd_enable", 00:07:10.815 "sock_get_default_impl", 00:07:10.815 "sock_set_default_impl", 00:07:10.815 "sock_impl_set_options", 00:07:10.815 "sock_impl_get_options", 00:07:10.815 "iobuf_get_stats", 00:07:10.815 "iobuf_set_options", 00:07:10.815 "framework_get_pci_devices", 00:07:10.815 "framework_get_config", 00:07:10.815 "framework_get_subsystems", 00:07:10.815 "trace_get_info", 00:07:10.815 "trace_get_tpoint_group_mask", 00:07:10.815 
"trace_disable_tpoint_group", 00:07:10.815 "trace_enable_tpoint_group", 00:07:10.815 "trace_clear_tpoint_mask", 00:07:10.815 "trace_set_tpoint_mask", 00:07:10.815 "keyring_get_keys", 00:07:10.815 "spdk_get_version", 00:07:10.815 "rpc_get_methods" 00:07:10.815 ] 00:07:10.815 20:27:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:10.815 20:27:59 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.815 20:27:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.815 20:27:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:10.815 20:27:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 928199 00:07:10.815 20:27:59 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 928199 ']' 00:07:10.815 20:27:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 928199 00:07:10.815 20:27:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:10.815 20:27:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.815 20:27:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 928199 00:07:10.815 20:27:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.815 20:27:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.815 20:27:59 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 928199' 00:07:10.815 killing process with pid 928199 00:07:10.815 20:27:59 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 928199 00:07:10.815 20:27:59 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 928199 00:07:11.075 00:07:11.075 real 0m1.532s 00:07:11.075 user 0m2.773s 00:07:11.075 sys 0m0.526s 00:07:11.075 20:27:59 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.075 20:27:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:11.075 ************************************ 00:07:11.075 END TEST spdkcli_tcp 00:07:11.075 ************************************ 00:07:11.335 20:27:59 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:11.335 20:27:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.335 20:27:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.335 20:27:59 -- common/autotest_common.sh@10 -- # set +x 00:07:11.335 ************************************ 00:07:11.335 START TEST dpdk_mem_utility 00:07:11.335 ************************************ 00:07:11.335 20:27:59 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:11.335 * Looking for test storage... 
00:07:11.335 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:07:11.335 20:27:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:11.335 20:27:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:11.335 20:27:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=928561 00:07:11.335 20:27:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 928561 00:07:11.335 20:27:59 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 928561 ']' 00:07:11.335 20:27:59 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.335 20:27:59 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.335 20:27:59 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.335 20:27:59 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.335 20:27:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:11.335 [2024-07-26 20:27:59.824800] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:07:11.335 [2024-07-26 20:27:59.824856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928561 ] 00:07:11.335 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.594 [2024-07-26 20:27:59.908081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.594 [2024-07-26 20:27:59.947109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.162 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.162 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:12.162 20:28:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:12.162 20:28:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:12.162 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.162 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:12.162 { 00:07:12.162 "filename": "/tmp/spdk_mem_dump.txt" 00:07:12.162 } 00:07:12.162 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.162 20:28:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:12.162 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:12.162 1 heaps totaling size 814.000000 MiB 00:07:12.162 size: 814.000000 MiB heap id: 0 00:07:12.162 end heaps---------- 00:07:12.162 8 mempools totaling size 598.116089 MiB 00:07:12.162 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:12.162 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:12.162 size: 84.521057 MiB name: bdev_io_928561 00:07:12.162 size: 51.011292 MiB name: evtpool_928561 00:07:12.163 size: 50.003479 MiB name: 
msgpool_928561 00:07:12.163 size: 21.763794 MiB name: PDU_Pool 00:07:12.163 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:12.163 size: 0.026123 MiB name: Session_Pool 00:07:12.163 end mempools------- 00:07:12.163 6 memzones totaling size 4.142822 MiB 00:07:12.163 size: 1.000366 MiB name: RG_ring_0_928561 00:07:12.163 size: 1.000366 MiB name: RG_ring_1_928561 00:07:12.163 size: 1.000366 MiB name: RG_ring_4_928561 00:07:12.163 size: 1.000366 MiB name: RG_ring_5_928561 00:07:12.163 size: 0.125366 MiB name: RG_ring_2_928561 00:07:12.163 size: 0.015991 MiB name: RG_ring_3_928561 00:07:12.163 end memzones------- 00:07:12.163 20:28:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:12.423 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:12.423 list of free elements. size: 12.519348 MiB 00:07:12.423 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:12.423 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:12.423 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:12.423 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:12.423 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:12.423 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:12.423 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:12.423 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:12.423 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:12.423 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:12.423 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:12.423 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:12.423 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:12.423 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:12.423 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:12.423 list of standard malloc elements. 
size: 199.218079 MiB 00:07:12.423 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:12.423 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:12.423 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:12.423 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:12.423 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:12.423 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:12.423 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:12.423 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:12.423 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:12.423 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:12.423 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:12.423 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:12.423 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:12.423 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:12.423 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:12.423 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:12.423 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:12.423 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:12.423 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:12.423 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:12.423 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:12.423 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:12.423 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:12.423 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:12.423 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:12.423 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:12.423 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:12.423 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:12.423 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:12.423 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:12.423 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:12.423 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:12.423 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:12.423 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:12.423 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:12.423 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:12.423 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:12.423 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:12.423 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:12.423 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:12.423 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:12.423 list of memzone associated elements. 
size: 602.262573 MiB 00:07:12.423 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:12.423 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:12.423 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:12.423 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:12.423 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:12.423 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_928561_0 00:07:12.423 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:12.423 associated memzone info: size: 48.002930 MiB name: MP_evtpool_928561_0 00:07:12.423 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:12.423 associated memzone info: size: 48.002930 MiB name: MP_msgpool_928561_0 00:07:12.423 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:12.423 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:12.423 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:12.423 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:12.423 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:12.423 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_928561 00:07:12.423 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:12.423 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_928561 00:07:12.423 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:12.423 associated memzone info: size: 1.007996 MiB name: MP_evtpool_928561 00:07:12.423 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:12.423 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:12.423 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:12.423 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:12.423 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:12.423 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:12.423 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:12.423 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:12.423 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:12.423 associated memzone info: size: 1.000366 MiB name: RG_ring_0_928561 00:07:12.423 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:12.423 associated memzone info: size: 1.000366 MiB name: RG_ring_1_928561 00:07:12.423 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:12.423 associated memzone info: size: 1.000366 MiB name: RG_ring_4_928561 00:07:12.423 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:12.423 associated memzone info: size: 1.000366 MiB name: RG_ring_5_928561 00:07:12.423 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:12.423 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_928561 00:07:12.423 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:12.423 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:12.423 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:12.423 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:12.424 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:12.424 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:12.424 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:12.424 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_928561 00:07:12.424 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:12.424 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:12.424 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:12.424 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:12.424 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:12.424 associated memzone info: size: 0.015991 MiB name: RG_ring_3_928561 00:07:12.424 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:12.424 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:12.424 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:12.424 associated memzone info: size: 0.000183 MiB name: MP_msgpool_928561 00:07:12.424 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:12.424 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_928561 00:07:12.424 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:12.424 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:12.424 20:28:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:12.424 20:28:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 928561 00:07:12.424 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 928561 ']' 00:07:12.424 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 928561 00:07:12.424 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:12.424 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.424 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 928561 00:07:12.424 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.424 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.424 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 928561' 00:07:12.424 killing process with pid 928561 00:07:12.424 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 928561 00:07:12.424 20:28:00 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 928561 00:07:12.684 00:07:12.684 real 0m1.408s 00:07:12.684 user 0m1.426s 00:07:12.684 sys 0m0.453s 00:07:12.684 20:28:01 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.684 20:28:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:12.684 ************************************ 00:07:12.684 END TEST dpdk_mem_utility 00:07:12.684 ************************************ 00:07:12.684 20:28:01 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:12.684 20:28:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.684 20:28:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.684 20:28:01 -- common/autotest_common.sh@10 -- # set +x 00:07:12.684 ************************************ 00:07:12.684 START TEST event 00:07:12.684 ************************************ 00:07:12.684 20:28:01 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:12.943 * Looking for test storage... 
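The dpdk_mem_utility run above boils down to three calls; a sketch assuming a spdk_tgt already listening on /var/tmp/spdk.sock (the $SPDK shorthand is introduced here for brevity and is not part of the test):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
$SPDK/scripts/dpdk_mem_info.py                # summarize heaps, mempools, memzones
$SPDK/scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0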
00:07:12.944 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:12.944 20:28:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:12.944 20:28:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:12.944 20:28:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:12.944 20:28:01 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:12.944 20:28:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.944 20:28:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:12.944 ************************************ 00:07:12.944 START TEST event_perf 00:07:12.944 ************************************ 00:07:12.944 20:28:01 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:12.944 Running I/O for 1 seconds...[2024-07-26 20:28:01.303021] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:07:12.944 [2024-07-26 20:28:01.303065] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928865 ] 00:07:12.944 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.944 [2024-07-26 20:28:01.386538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.944 [2024-07-26 20:28:01.428037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.944 [2024-07-26 20:28:01.428135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.944 [2024-07-26 20:28:01.428222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.944 [2024-07-26 20:28:01.428224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.321 Running I/O for 1 seconds... 00:07:14.321 lcore 0: 216627 00:07:14.321 lcore 1: 216626 00:07:14.321 lcore 2: 216626 00:07:14.321 lcore 3: 216627 00:07:14.321 done. 00:07:14.321 00:07:14.321 real 0m1.193s 00:07:14.321 user 0m4.096s 00:07:14.321 sys 0m0.096s 00:07:14.321 20:28:02 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.321 20:28:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:14.321 ************************************ 00:07:14.321 END TEST event_perf 00:07:14.321 ************************************ 00:07:14.321 20:28:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:14.321 20:28:02 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:14.321 20:28:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.321 20:28:02 event -- common/autotest_common.sh@10 -- # set +x 00:07:14.321 ************************************ 00:07:14.321 START TEST event_reactor 00:07:14.321 ************************************ 00:07:14.321 20:28:02 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:14.321 [2024-07-26 20:28:02.587194] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
00:07:14.321 [2024-07-26 20:28:02.587275] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929060 ] 00:07:14.321 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.321 [2024-07-26 20:28:02.673778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.321 [2024-07-26 20:28:02.711411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.300 test_start 00:07:15.300 oneshot 00:07:15.300 tick 100 00:07:15.300 tick 100 00:07:15.300 tick 250 00:07:15.300 tick 100 00:07:15.300 tick 100 00:07:15.300 tick 100 00:07:15.300 tick 250 00:07:15.300 tick 500 00:07:15.300 tick 100 00:07:15.300 tick 100 00:07:15.300 tick 250 00:07:15.300 tick 100 00:07:15.300 tick 100 00:07:15.300 test_end 00:07:15.300 00:07:15.300 real 0m1.202s 00:07:15.300 user 0m1.101s 00:07:15.300 sys 0m0.097s 00:07:15.300 20:28:03 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.300 20:28:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:15.300 ************************************ 00:07:15.300 END TEST event_reactor 00:07:15.300 ************************************ 00:07:15.300 20:28:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:15.300 20:28:03 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:15.300 20:28:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.300 20:28:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:15.300 ************************************ 00:07:15.300 START TEST event_reactor_perf 00:07:15.300 ************************************ 00:07:15.300 20:28:03 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:15.300 [2024-07-26 20:28:03.852122] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
00:07:15.300 [2024-07-26 20:28:03.852203] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929344 ] 00:07:15.558 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.558 [2024-07-26 20:28:03.935806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.558 [2024-07-26 20:28:03.972918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.491 test_start 00:07:16.491 test_end 00:07:16.491 Performance: 532966 events per second 00:07:16.491 00:07:16.491 real 0m1.199s 00:07:16.491 user 0m1.097s 00:07:16.491 sys 0m0.099s 00:07:16.491 20:28:05 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.491 20:28:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:16.491 ************************************ 00:07:16.491 END TEST event_reactor_perf 00:07:16.491 ************************************ 00:07:16.749 20:28:05 event -- event/event.sh@49 -- # uname -s 00:07:16.750 20:28:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:16.750 20:28:05 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:16.750 20:28:05 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.750 20:28:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.750 20:28:05 event -- common/autotest_common.sh@10 -- # set +x 00:07:16.750 ************************************ 00:07:16.750 START TEST event_scheduler 00:07:16.750 ************************************ 00:07:16.750 20:28:05 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:16.750 * Looking for test storage... 00:07:16.750 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:07:16.750 20:28:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:16.750 20:28:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=929650 00:07:16.750 20:28:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:16.750 20:28:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:16.750 20:28:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 929650 00:07:16.750 20:28:05 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 929650 ']' 00:07:16.750 20:28:05 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.750 20:28:05 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.750 20:28:05 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
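The three event tests above (event_perf, event_reactor, event_reactor_perf) are standalone binaries from the build tree; a sketch of the direct invocations as traced, reusing the $SPDK shorthand from the sketch above, before the scheduler test that starts next:

$SPDK/test/event/event_perf/event_perf -m 0xF -t 1   # per-lcore event counts across 4 cores
$SPDK/test/event/reactor/reactor -t 1                # oneshot/tick poller sequence on one core
$SPDK/test/event/reactor_perf/reactor_perf -t 1      # events per second on one core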
00:07:16.750 20:28:05 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.750 20:28:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:16.750 [2024-07-26 20:28:05.262256] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:07:16.750 [2024-07-26 20:28:05.262310] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929650 ] 00:07:16.750 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.009 [2024-07-26 20:28:05.343968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.009 [2024-07-26 20:28:05.384947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.009 [2024-07-26 20:28:05.385034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.009 [2024-07-26 20:28:05.385116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.009 [2024-07-26 20:28:05.385118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.576 20:28:06 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.576 20:28:06 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:17.576 20:28:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:17.576 20:28:06 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.576 20:28:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:17.576 [2024-07-26 20:28:06.083617] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:17.576 [2024-07-26 20:28:06.083642] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:17.576 [2024-07-26 20:28:06.083654] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:17.576 [2024-07-26 20:28:06.083664] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:17.576 [2024-07-26 20:28:06.083671] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:17.576 20:28:06 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.576 20:28:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:17.576 20:28:06 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.576 20:28:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:17.835 [2024-07-26 20:28:06.155246] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
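The scheduler_create_thread traces below go through rpc_cmd, a test wrapper around rpc.py; a sketch of the same sequence with rpc.py called directly. Thread ids 11 and 12 are the ones this run happened to get back, and the scheduler_plugin module must be importable (the test runs from its own directory).

RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
$SPDK/scripts/rpc.py framework_set_scheduler dynamic   # must precede framework_start_init
$SPDK/scripts/rpc.py framework_start_init
$RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # returns a thread id
$RPC scheduler_thread_set_active 11 50                        # re-weight an existing thread
$RPC scheduler_thread_delete 12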
00:07:17.835 20:28:06 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.835 20:28:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:17.835 20:28:06 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.835 20:28:06 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.835 20:28:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:17.835 ************************************ 00:07:17.835 START TEST scheduler_create_thread 00:07:17.835 ************************************ 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.835 2 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.835 3 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.835 4 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.835 5 00:07:17.835 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 6 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 7 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 8 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 9 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 10 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.836 20:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.212 20:28:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.212 20:28:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:19.212 20:28:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:19.212 20:28:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.212 20:28:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.589 20:28:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.589 00:07:20.589 real 0m2.620s 00:07:20.589 user 0m0.024s 00:07:20.589 sys 0m0.006s 00:07:20.589 20:28:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.589 20:28:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.589 ************************************ 00:07:20.589 END TEST scheduler_create_thread 00:07:20.589 ************************************ 00:07:20.589 20:28:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:20.589 20:28:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 929650 00:07:20.589 20:28:08 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 929650 ']' 00:07:20.589 20:28:08 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 929650 00:07:20.589 20:28:08 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:20.589 20:28:08 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.589 20:28:08 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 929650 00:07:20.589 20:28:08 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:20.589 20:28:08 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:20.589 20:28:08 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 929650' 00:07:20.589 killing process with pid 929650 00:07:20.589 20:28:08 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 929650 00:07:20.589 20:28:08 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 929650 00:07:20.848 [2024-07-26 20:28:09.297231] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:21.107 00:07:21.107 real 0m4.374s 00:07:21.107 user 0m8.263s 00:07:21.107 sys 0m0.457s 00:07:21.107 20:28:09 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.107 20:28:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:21.107 ************************************ 00:07:21.107 END TEST event_scheduler 00:07:21.107 ************************************ 00:07:21.107 20:28:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:21.107 20:28:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:21.107 20:28:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.107 20:28:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.107 20:28:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.107 ************************************ 00:07:21.107 START TEST app_repeat 00:07:21.107 ************************************ 00:07:21.107 20:28:09 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=930502 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 930502' 00:07:21.107 Process app_repeat pid: 930502 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:21.107 spdk_app_start Round 0 00:07:21.107 20:28:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 930502 /var/tmp/spdk-nbd.sock 00:07:21.107 20:28:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 930502 ']' 00:07:21.107 20:28:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:21.107 20:28:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.107 20:28:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:21.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:21.107 20:28:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.107 20:28:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:21.107 [2024-07-26 20:28:09.591281] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
00:07:21.108 [2024-07-26 20:28:09.591326] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930502 ] 00:07:21.108 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.366 [2024-07-26 20:28:09.676293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.366 [2024-07-26 20:28:09.716764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.366 [2024-07-26 20:28:09.716768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.366 20:28:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.366 20:28:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:21.366 20:28:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:21.625 Malloc0 00:07:21.625 20:28:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:21.625 Malloc1 00:07:21.625 20:28:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.625 20:28:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:21.882 /dev/nbd0 00:07:21.882 20:28:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:21.882 20:28:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:21.882 20:28:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:21.882 20:28:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:21.882 20:28:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.882 20:28:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.882 20:28:10 event.app_repeat -- 
common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:21.882 20:28:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:21.882 20:28:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.882 20:28:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.882 20:28:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:21.882 1+0 records in 00:07:21.882 1+0 records out 00:07:21.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202181 s, 20.3 MB/s 00:07:21.882 20:28:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:21.882 20:28:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:21.882 20:28:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:21.882 20:28:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.882 20:28:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:21.882 20:28:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.882 20:28:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.882 20:28:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:22.140 /dev/nbd1 00:07:22.140 20:28:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:22.140 20:28:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:22.140 1+0 records in 00:07:22.140 1+0 records out 00:07:22.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225735 s, 18.1 MB/s 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:22.140 20:28:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:22.140 20:28:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.140 20:28:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:07:22.140 20:28:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.140 20:28:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.140 20:28:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.398 20:28:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:22.398 { 00:07:22.398 "nbd_device": "/dev/nbd0", 00:07:22.398 "bdev_name": "Malloc0" 00:07:22.398 }, 00:07:22.398 { 00:07:22.398 "nbd_device": "/dev/nbd1", 00:07:22.398 "bdev_name": "Malloc1" 00:07:22.398 } 00:07:22.398 ]' 00:07:22.398 20:28:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:22.398 { 00:07:22.398 "nbd_device": "/dev/nbd0", 00:07:22.398 "bdev_name": "Malloc0" 00:07:22.398 }, 00:07:22.398 { 00:07:22.398 "nbd_device": "/dev/nbd1", 00:07:22.398 "bdev_name": "Malloc1" 00:07:22.398 } 00:07:22.398 ]' 00:07:22.398 20:28:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.398 20:28:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:22.398 /dev/nbd1' 00:07:22.398 20:28:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:22.398 /dev/nbd1' 00:07:22.398 20:28:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:22.399 256+0 records in 00:07:22.399 256+0 records out 00:07:22.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113126 s, 92.7 MB/s 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:22.399 256+0 records in 00:07:22.399 256+0 records out 00:07:22.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018835 s, 55.7 MB/s 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:22.399 256+0 records in 00:07:22.399 256+0 records out 00:07:22.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208391 s, 50.3 MB/s 00:07:22.399 20:28:10 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.399 20:28:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.657 20:28:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.657 20:28:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.657 20:28:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.657 20:28:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.657 20:28:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.657 20:28:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.657 20:28:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:22.657 20:28:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.657 20:28:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.657 20:28:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:22.916 20:28:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:22.916 20:28:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:22.916 20:28:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:22.916 20:28:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.916 20:28:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.916 
20:28:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:22.916 20:28:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:22.916 20:28:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.916 20:28:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.916 20:28:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.916 20:28:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.175 20:28:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.175 20:28:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.175 20:28:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.175 20:28:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.175 20:28:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.175 20:28:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.175 20:28:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:23.175 20:28:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.175 20:28:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.175 20:28:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:23.175 20:28:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:23.175 20:28:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:23.175 20:28:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:23.175 20:28:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:23.433 [2024-07-26 20:28:11.895077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.433 [2024-07-26 20:28:11.929824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.434 [2024-07-26 20:28:11.929827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.434 [2024-07-26 20:28:11.970443] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:23.434 [2024-07-26 20:28:11.970486] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:26.720 20:28:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:26.721 20:28:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:26.721 spdk_app_start Round 1 00:07:26.721 20:28:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 930502 /var/tmp/spdk-nbd.sock 00:07:26.721 20:28:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 930502 ']' 00:07:26.721 20:28:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:26.721 20:28:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.721 20:28:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:26.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
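The waitfornbd trace that repeats above for nbd0 and nbd1 (common/autotest_common.sh@868-889) is the per-device readiness check: poll /proc/partitions until the device name appears, then pull one block through the device with O_DIRECT and confirm the read produced data. A minimal sketch of that pattern, reconstructed from the xtrace alone — the 20-try budget, the 4 KiB direct read, and the non-zero size check mirror the log, while the sleep between retries and the collapsed single read attempt are assumptions of this sketch:

    # Poll until an nbd device is registered, then prove it serves reads
    # (sketch; the real helper lives in common/autotest_common.sh).
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: the trace above breaks on the first try
        done
        ((i <= 20)) || return 1
        # One 4 KiB O_DIRECT read through the device, as in the trace.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # a zero-byte result would mean the device is dead
    }

The dd read is the important half: a name in /proc/partitions only proves the kernel registered the device, not that the SPDK target behind it actually answers I/O.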
00:07:26.721 20:28:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.721 20:28:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:26.721 20:28:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.721 20:28:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:26.721 20:28:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:26.721 Malloc0 00:07:26.721 20:28:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:26.721 Malloc1 00:07:26.721 20:28:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.721 20:28:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:26.980 /dev/nbd0 00:07:26.980 20:28:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:26.980 20:28:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:07:26.980 1+0 records in 00:07:26.980 1+0 records out 00:07:26.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212465 s, 19.3 MB/s 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:26.980 20:28:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:26.980 20:28:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.980 20:28:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.980 20:28:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:27.240 /dev/nbd1 00:07:27.240 20:28:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:27.240 20:28:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:27.240 1+0 records in 00:07:27.240 1+0 records out 00:07:27.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022395 s, 18.3 MB/s 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:27.240 20:28:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:27.240 20:28:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.240 20:28:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:27.240 20:28:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:27.240 20:28:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.240 20:28:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:27.500 { 00:07:27.500 
"nbd_device": "/dev/nbd0", 00:07:27.500 "bdev_name": "Malloc0" 00:07:27.500 }, 00:07:27.500 { 00:07:27.500 "nbd_device": "/dev/nbd1", 00:07:27.500 "bdev_name": "Malloc1" 00:07:27.500 } 00:07:27.500 ]' 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:27.500 { 00:07:27.500 "nbd_device": "/dev/nbd0", 00:07:27.500 "bdev_name": "Malloc0" 00:07:27.500 }, 00:07:27.500 { 00:07:27.500 "nbd_device": "/dev/nbd1", 00:07:27.500 "bdev_name": "Malloc1" 00:07:27.500 } 00:07:27.500 ]' 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:27.500 /dev/nbd1' 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:27.500 /dev/nbd1' 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:27.500 256+0 records in 00:07:27.500 256+0 records out 00:07:27.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112112 s, 93.5 MB/s 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:27.500 256+0 records in 00:07:27.500 256+0 records out 00:07:27.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165396 s, 63.4 MB/s 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:27.500 256+0 records in 00:07:27.500 256+0 records out 00:07:27.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207778 s, 50.5 MB/s 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.500 20:28:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:27.760 20:28:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.760 20:28:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.760 20:28:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.760 20:28:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.760 20:28:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.760 20:28:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.760 20:28:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:27.760 20:28:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.760 20:28:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.760 20:28:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:28.020 20:28:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.280 20:28:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:28.280 20:28:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.280 20:28:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.280 20:28:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:28.280 20:28:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.280 20:28:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:28.280 20:28:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:28.280 20:28:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:28.280 20:28:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:28.280 20:28:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:28.280 20:28:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:28.540 [2024-07-26 20:28:16.957884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.540 [2024-07-26 20:28:16.992668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.540 [2024-07-26 20:28:16.992679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.540 [2024-07-26 20:28:17.034439] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:28.540 [2024-07-26 20:28:17.034480] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:31.833 20:28:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:31.833 20:28:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:31.833 spdk_app_start Round 2 00:07:31.833 20:28:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 930502 /var/tmp/spdk-nbd.sock 00:07:31.833 20:28:19 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 930502 ']' 00:07:31.833 20:28:19 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:31.833 20:28:19 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.833 20:28:19 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:31.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
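The nbd_dd_data_verify pair traced just above (bdev/nbd_common.sh@100 for write, @101 for verify) is the data-integrity core of each round: 1 MiB of /dev/urandom is staged in nbdrandtest, pushed through every nbd device with O_DIRECT, then compared back byte-for-byte with cmp. A sketch of that flow using only commands visible in the log; unlike the real helper, which takes the device list as one quoted string plus an operation argument, this sketch takes the devices as arguments and runs both phases in one call:

    # Stage random data, write it through each device, read it back and
    # compare (sketch of nbd_common.sh's nbd_dd_data_verify).
    nbd_dd_data_verify() {
        local tmp_file=/tmp/nbdrandtest dev
        # 256 x 4096-byte blocks = 1 MiB of reference data.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for dev in "$@"; do
            # O_DIRECT so the bytes reach the SPDK bdev, not just the page cache.
            dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct || return 1
        done
        for dev in "$@"; do
            # -b reports differing bytes; -n 1M limits the compare to what was written.
            cmp -b -n 1M "$tmp_file" "$dev" || return 1
        done
        rm "$tmp_file"
    }

oflag=direct on the write matters: without it the data could sit in the page cache and the later cmp would never exercise the SPDK bdev underneath.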
00:07:31.833 20:28:19 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.833 20:28:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:31.833 20:28:19 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.833 20:28:19 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:31.833 20:28:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.833 Malloc0 00:07:31.833 20:28:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.833 Malloc1 00:07:31.833 20:28:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.833 20:28:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:32.093 /dev/nbd0 00:07:32.093 20:28:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:32.093 20:28:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:07:32.093 1+0 records in 00:07:32.093 1+0 records out 00:07:32.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232754 s, 17.6 MB/s 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:32.093 20:28:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:32.093 20:28:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.093 20:28:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.093 20:28:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:32.352 /dev/nbd1 00:07:32.352 20:28:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:32.352 20:28:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.352 1+0 records in 00:07:32.352 1+0 records out 00:07:32.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000132267 s, 31.0 MB/s 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:32.352 20:28:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:32.352 20:28:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.352 20:28:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.352 20:28:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.352 20:28:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.352 20:28:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:32.611 { 00:07:32.611 
"nbd_device": "/dev/nbd0", 00:07:32.611 "bdev_name": "Malloc0" 00:07:32.611 }, 00:07:32.611 { 00:07:32.611 "nbd_device": "/dev/nbd1", 00:07:32.611 "bdev_name": "Malloc1" 00:07:32.611 } 00:07:32.611 ]' 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:32.611 { 00:07:32.611 "nbd_device": "/dev/nbd0", 00:07:32.611 "bdev_name": "Malloc0" 00:07:32.611 }, 00:07:32.611 { 00:07:32.611 "nbd_device": "/dev/nbd1", 00:07:32.611 "bdev_name": "Malloc1" 00:07:32.611 } 00:07:32.611 ]' 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:32.611 /dev/nbd1' 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:32.611 /dev/nbd1' 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:32.611 256+0 records in 00:07:32.611 256+0 records out 00:07:32.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106608 s, 98.4 MB/s 00:07:32.611 20:28:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.611 20:28:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:32.611 256+0 records in 00:07:32.611 256+0 records out 00:07:32.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192433 s, 54.5 MB/s 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:32.612 256+0 records in 00:07:32.612 256+0 records out 00:07:32.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185009 s, 56.7 MB/s 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.612 20:28:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:32.871 20:28:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:32.871 20:28:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:32.871 20:28:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:32.871 20:28:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.871 20:28:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.871 20:28:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:32.871 20:28:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.871 20:28:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.871 20:28:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.871 20:28:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:33.131 20:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.391 20:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:33.391 20:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:33.391 20:28:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:33.391 20:28:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:33.391 20:28:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:33.391 20:28:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:33.391 20:28:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:33.391 20:28:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:33.657 [2024-07-26 20:28:22.062666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.657 [2024-07-26 20:28:22.097305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.657 [2024-07-26 20:28:22.097308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.657 [2024-07-26 20:28:22.137931] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:33.657 [2024-07-26 20:28:22.137972] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:36.995 20:28:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 930502 /var/tmp/spdk-nbd.sock 00:07:36.995 20:28:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 930502 ']' 00:07:36.995 20:28:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:36.995 20:28:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.995 20:28:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:36.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
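The empty nbd_get_disks reply traced above is how each round proves teardown worked: the RPC returns '[]', jq extracts no device paths, and grep -c therefore counts 0, which the '[' 0 -ne 0 ']' guard accepts before spdk_kill_instance SIGTERM stops the target. A sketch of that counting helper as the log performs it; $rpc_py stands in for the full spdk/scripts/rpc.py path the trace uses:

    # Count the nbd devices an SPDK target still exports
    # (sketch of nbd_get_count; $rpc_py is assumed to point at scripts/rpc.py).
    nbd_get_count() {
        local rpc_server=$1 disks_json disks_name count
        disks_json=$("$rpc_py" -s "$rpc_server" nbd_get_disks)
        # Extract just the /dev/nbdX paths from the JSON reply.
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero when it counts 0 matches, hence the || true.
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }

While both disks are attached the trace expects a count of 2; after nbd_stop_disk it expects 0, and any other value trips the -ne guards and fails the round.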
00:07:36.995 20:28:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.995 20:28:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:36.995 20:28:25 event.app_repeat -- event/event.sh@39 -- # killprocess 930502 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 930502 ']' 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 930502 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 930502 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 930502' 00:07:36.995 killing process with pid 930502 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@969 -- # kill 930502 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@974 -- # wait 930502 00:07:36.995 spdk_app_start is called in Round 0. 00:07:36.995 Shutdown signal received, stop current app iteration 00:07:36.995 Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 reinitialization... 00:07:36.995 spdk_app_start is called in Round 1. 00:07:36.995 Shutdown signal received, stop current app iteration 00:07:36.995 Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 reinitialization... 00:07:36.995 spdk_app_start is called in Round 2. 00:07:36.995 Shutdown signal received, stop current app iteration 00:07:36.995 Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 reinitialization... 00:07:36.995 spdk_app_start is called in Round 3. 00:07:36.995 Shutdown signal received, stop current app iteration 00:07:36.995 20:28:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:36.995 20:28:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:36.995 00:07:36.995 real 0m15.689s 00:07:36.995 user 0m33.429s 00:07:36.995 sys 0m3.069s 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.995 20:28:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:36.995 ************************************ 00:07:36.995 END TEST app_repeat 00:07:36.995 ************************************ 00:07:36.995 20:28:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:36.995 20:28:25 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:36.995 20:28:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.995 20:28:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.995 20:28:25 event -- common/autotest_common.sh@10 -- # set +x 00:07:36.995 ************************************ 00:07:36.995 START TEST cpu_locks 00:07:36.995 ************************************ 00:07:36.995 20:28:25 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:36.995 * Looking for test storage... 
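killprocess, traced just before the app_repeat timing summary (common/autotest_common.sh@950-974), refuses to signal anything it cannot identify: it probes the pid with kill -0, checks on Linux that the command name is an SPDK reactor rather than a bare sudo wrapper, and only then kills and waits. A sketch reconstructed from that trace:

    # Stop an SPDK target by pid with the same sanity checks the trace shows
    # (sketch; the real killprocess is in common/autotest_common.sh).
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1          # -0 probes existence, sends no signal
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            # Never signal a bare sudo wrapper; kill the reactor itself.
            [ "$process_name" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # works here because the target is a child of the test shell
    }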
00:07:36.995 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:36.995 20:28:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:36.995 20:28:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:36.995 20:28:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:36.995 20:28:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:36.995 20:28:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.995 20:28:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.995 20:28:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.995 ************************************ 00:07:36.995 START TEST default_locks 00:07:36.995 ************************************ 00:07:36.995 20:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:36.995 20:28:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=933399 00:07:36.995 20:28:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 933399 00:07:36.995 20:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 933399 ']' 00:07:36.995 20:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.995 20:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.995 20:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.995 20:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.995 20:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.996 20:28:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:36.996 [2024-07-26 20:28:25.530831] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
00:07:36.996 [2024-07-26 20:28:25.530880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid933399 ] 00:07:37.255 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.255 [2024-07-26 20:28:25.616078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.255 [2024-07-26 20:28:25.656568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.824 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.824 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:37.824 20:28:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 933399 00:07:37.824 20:28:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 933399 00:07:37.824 20:28:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.392 lslocks: write error 00:07:38.392 20:28:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 933399 00:07:38.392 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 933399 ']' 00:07:38.392 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 933399 00:07:38.392 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:38.392 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.392 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 933399 00:07:38.392 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.392 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.392 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 933399' 00:07:38.392 killing process with pid 933399 00:07:38.392 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 933399 00:07:38.392 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 933399 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 933399 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 933399 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 933399 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 933399 ']' 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.652 20:28:26 event.cpu_locks.default_locks 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.652 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (933399) - No such process 00:07:38.652 ERROR: process (pid: 933399) is no longer running 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:38.652 00:07:38.652 real 0m1.516s 00:07:38.652 user 0m1.577s 00:07:38.652 sys 0m0.513s 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.652 20:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.652 ************************************ 00:07:38.652 END TEST default_locks 00:07:38.652 ************************************ 00:07:38.652 20:28:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:38.652 20:28:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.652 20:28:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.652 20:28:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.652 ************************************ 00:07:38.653 START TEST default_locks_via_rpc 00:07:38.653 ************************************ 00:07:38.653 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:38.653 20:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=933697 00:07:38.653 20:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 933697 00:07:38.653 20:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:38.653 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 933697 ']' 00:07:38.653 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.653 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:38.653 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.653 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.653 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.653 [2024-07-26 20:28:27.125123] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:07:38.653 [2024-07-26 20:28:27.125168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid933697 ] 00:07:38.653 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.912 [2024-07-26 20:28:27.210635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.912 [2024-07-26 20:28:27.248275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 933697 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 933697 00:07:39.480 20:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:39.739 20:28:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 933697 00:07:39.739 20:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 933697 ']' 00:07:39.739 20:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 933697 00:07:39.739 20:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:39.998 20:28:28 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.998 20:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 933697 00:07:39.998 20:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.998 20:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.998 20:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 933697' 00:07:39.998 killing process with pid 933697 00:07:39.998 20:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 933697 00:07:39.998 20:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 933697 00:07:40.257 00:07:40.257 real 0m1.565s 00:07:40.257 user 0m1.622s 00:07:40.257 sys 0m0.540s 00:07:40.257 20:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.257 20:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.257 ************************************ 00:07:40.257 END TEST default_locks_via_rpc 00:07:40.257 ************************************ 00:07:40.257 20:28:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:40.257 20:28:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.257 20:28:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.257 20:28:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.257 ************************************ 00:07:40.257 START TEST non_locking_app_on_locked_coremask 00:07:40.257 ************************************ 00:07:40.257 20:28:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:40.257 20:28:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=933994 00:07:40.257 20:28:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 933994 /var/tmp/spdk.sock 00:07:40.257 20:28:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.257 20:28:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 933994 ']' 00:07:40.257 20:28:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.257 20:28:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.257 20:28:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
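The default_locks_via_rpc pass above exercises the runtime toggle rather than the launch-time flag: framework_disable_cpumask_locks releases the per-core lock, framework_enable_cpumask_locks re-claims it, and locks_exist confirms the claim by grepping lslocks output for spdk_cpu_lock. A minimal standalone reproduction, assuming the default RPC socket /var/tmp/spdk.sock and the tree's scripts/rpc.py (both used elsewhere in this run), could look like:

  # start a single-core target and give it a moment to come up
  ./build/bin/spdk_tgt -m 0x1 &
  pid=$!
  sleep 1
  # release the core-mask locks; no spdk_cpu_lock entries should remain
  ./scripts/rpc.py framework_disable_cpumask_locks
  lslocks -p "$pid" | grep -c spdk_cpu_lock    # expect 0
  # re-claim and verify the lock is back
  ./scripts/rpc.py framework_enable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo lock held
  kill "$pid"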
00:07:40.257 20:28:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.257 20:28:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.257 [2024-07-26 20:28:28.765082] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:07:40.258 [2024-07-26 20:28:28.765131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid933994 ] 00:07:40.258 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.517 [2024-07-26 20:28:28.848614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.517 [2024-07-26 20:28:28.886250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.085 20:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.085 20:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:41.085 20:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=934253 00:07:41.085 20:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 934253 /var/tmp/spdk2.sock 00:07:41.085 20:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:41.085 20:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 934253 ']' 00:07:41.085 20:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:41.085 20:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.085 20:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:41.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:41.085 20:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.085 20:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.085 [2024-07-26 20:28:29.609444] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:07:41.085 [2024-07-26 20:28:29.609499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934253 ] 00:07:41.345 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.345 [2024-07-26 20:28:29.729618] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
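Worth noting in the records above: the second target (pid 934253) comes up on the same 0x1 mask only because it was launched with --disable-cpumask-locks, and it takes a separate RPC socket via -r /var/tmp/spdk2.sock so the two processes do not fight over /var/tmp/spdk.sock. The shape of the pair, using the flags shown in this trace:

  # first instance claims the core-0 lock and the default socket
  ./build/bin/spdk_tgt -m 0x1 &
  # second instance shares core 0 by opting out of the lock,
  # and listens on its own RPC socket
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &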
00:07:41.345 [2024-07-26 20:28:29.729654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.345 [2024-07-26 20:28:29.805733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.911 20:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.911 20:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:41.911 20:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 933994 00:07:41.911 20:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 933994 00:07:41.911 20:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:43.288 lslocks: write error 00:07:43.288 20:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 933994 00:07:43.288 20:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 933994 ']' 00:07:43.288 20:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 933994 00:07:43.288 20:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:43.288 20:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.288 20:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 933994 00:07:43.288 20:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.288 20:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.288 20:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 933994' 00:07:43.288 killing process with pid 933994 00:07:43.288 20:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 933994 00:07:43.288 20:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 933994 00:07:43.855 20:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 934253 00:07:43.855 20:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 934253 ']' 00:07:43.855 20:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 934253 00:07:43.855 20:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:43.855 20:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.855 20:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 934253 00:07:43.855 20:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.855 20:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.855 20:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 934253' 00:07:43.855 killing 
process with pid 934253 00:07:43.855 20:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 934253 00:07:43.855 20:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 934253 00:07:44.423 00:07:44.423 real 0m3.983s 00:07:44.423 user 0m4.233s 00:07:44.423 sys 0m1.390s 00:07:44.423 20:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.423 20:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.423 ************************************ 00:07:44.423 END TEST non_locking_app_on_locked_coremask 00:07:44.423 ************************************ 00:07:44.423 20:28:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:44.423 20:28:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.423 20:28:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.423 20:28:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.423 ************************************ 00:07:44.423 START TEST locking_app_on_unlocked_coremask 00:07:44.423 ************************************ 00:07:44.423 20:28:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:44.423 20:28:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=934821 00:07:44.423 20:28:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 934821 /var/tmp/spdk.sock 00:07:44.423 20:28:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:44.423 20:28:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 934821 ']' 00:07:44.423 20:28:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.423 20:28:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.423 20:28:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.423 20:28:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.423 20:28:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.423 [2024-07-26 20:28:32.832516] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:07:44.423 [2024-07-26 20:28:32.832564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934821 ] 00:07:44.423 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.423 [2024-07-26 20:28:32.915878] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
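Two details in the teardown above are easy to misread. The "lslocks: write error" line is benign: grep -q exits as soon as it matches, so lslocks finds its stdout pipe closed. And killprocess checks the victim before signalling it: the pid must still answer kill -0, and its comm name (reactor_0 here) must not be sudo. A condensed sketch of that guard, simplified from what the trace shows (the real helper in autotest_common.sh covers more platforms and cases):

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                # must still be alive
    [ "$(uname)" = Linux ] || return 1        # sketch covers Linux only
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1            # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
  }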
00:07:44.423 [2024-07-26 20:28:32.915904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.423 [2024-07-26 20:28:32.951590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.359 20:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.359 20:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:45.359 20:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=934858 00:07:45.359 20:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 934858 /var/tmp/spdk2.sock 00:07:45.359 20:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:45.359 20:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 934858 ']' 00:07:45.359 20:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.359 20:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.359 20:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.360 20:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.360 20:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.360 [2024-07-26 20:28:33.678645] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
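locking_app_on_unlocked_coremask inverts the previous case: here the first target is the one launched with --disable-cpumask-locks, so the second target (934858), started with no special flag, is free to claim the core-0 lock, and the locks_exist check that follows runs against the second pid. The lock itself is a file lock on a well-known per-core path, so it can be inspected directly; assuming a Linux box like the one in this run:

  ls -l /var/tmp/spdk_cpu_lock_*     # one file per claimed core
  lslocks | grep spdk_cpu_lock       # which pid holds each lock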
00:07:45.360 [2024-07-26 20:28:33.678703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934858 ] 00:07:45.360 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.360 [2024-07-26 20:28:33.798510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.360 [2024-07-26 20:28:33.877629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.295 20:28:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.295 20:28:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:46.295 20:28:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 934858 00:07:46.295 20:28:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 934858 00:07:46.295 20:28:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:46.860 lslocks: write error 00:07:46.860 20:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 934821 00:07:46.860 20:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 934821 ']' 00:07:46.860 20:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 934821 00:07:46.860 20:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:46.860 20:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.860 20:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 934821 00:07:47.120 20:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.120 20:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.120 20:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 934821' 00:07:47.120 killing process with pid 934821 00:07:47.120 20:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 934821 00:07:47.120 20:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 934821 00:07:47.689 20:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 934858 00:07:47.689 20:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 934858 ']' 00:07:47.689 20:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 934858 00:07:47.689 20:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:47.689 20:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.689 20:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 934858 00:07:47.689 20:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:07:47.689 20:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.689 20:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 934858' 00:07:47.689 killing process with pid 934858 00:07:47.689 20:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 934858 00:07:47.689 20:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 934858 00:07:47.949 00:07:47.949 real 0m3.602s 00:07:47.949 user 0m3.852s 00:07:47.949 sys 0m1.237s 00:07:47.949 20:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.949 20:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.949 ************************************ 00:07:47.949 END TEST locking_app_on_unlocked_coremask 00:07:47.949 ************************************ 00:07:47.949 20:28:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:47.949 20:28:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.949 20:28:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.949 20:28:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.949 ************************************ 00:07:47.949 START TEST locking_app_on_locked_coremask 00:07:47.949 ************************************ 00:07:47.949 20:28:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:47.949 20:28:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=935406 00:07:47.949 20:28:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 935406 /var/tmp/spdk.sock 00:07:47.949 20:28:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:47.949 20:28:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 935406 ']' 00:07:47.949 20:28:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.949 20:28:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.949 20:28:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.949 20:28:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.949 20:28:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.209 [2024-07-26 20:28:36.514728] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
00:07:48.209 [2024-07-26 20:28:36.514778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935406 ] 00:07:48.209 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.209 [2024-07-26 20:28:36.597781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.209 [2024-07-26 20:28:36.632881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.777 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.777 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:48.777 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=935669 00:07:48.777 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 935669 /var/tmp/spdk2.sock 00:07:48.777 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:48.777 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:48.777 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 935669 /var/tmp/spdk2.sock 00:07:48.777 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:48.777 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.777 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:48.777 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.777 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 935669 /var/tmp/spdk2.sock 00:07:48.778 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 935669 ']' 00:07:48.778 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:48.778 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.778 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:48.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:48.778 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.778 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.037 [2024-07-26 20:28:37.361015] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
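Here the failure is the expected result: 935406 already holds the core-0 lock, so the second spdk_tgt (935669) is launched under the NOT wrapper, which succeeds only when the wrapped command fails, and the records that follow show the launch dying with "Cannot create lock on core 0". A minimal stand-in for NOT, ignoring the exit-code bookkeeping (the es checks in the trace) that the real helper performs:

  # succeed only when the wrapped command fails
  NOT() {
    if "$@"; then
      return 1
    fi
    return 0
  }
  # expect the double claim on core 0 to be rejected
  NOT ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock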
00:07:49.037 [2024-07-26 20:28:37.361069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935669 ] 00:07:49.037 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.037 [2024-07-26 20:28:37.478290] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 935406 has claimed it. 00:07:49.037 [2024-07-26 20:28:37.478334] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:49.605 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (935669) - No such process 00:07:49.605 ERROR: process (pid: 935669) is no longer running 00:07:49.605 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.605 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:49.605 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:49.605 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.605 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:49.605 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.605 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 935406 00:07:49.605 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 935406 00:07:49.605 20:28:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:50.173 lslocks: write error 00:07:50.173 20:28:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 935406 00:07:50.173 20:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 935406 ']' 00:07:50.173 20:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 935406 00:07:50.173 20:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:50.173 20:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.173 20:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 935406 00:07:50.173 20:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:50.173 20:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:50.173 20:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 935406' 00:07:50.173 killing process with pid 935406 00:07:50.173 20:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 935406 00:07:50.173 20:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 935406 00:07:50.432 00:07:50.432 real 0m2.494s 00:07:50.433 user 0m2.711s 00:07:50.433 sys 0m0.832s 00:07:50.433 20:28:38 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.433 20:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.433 ************************************ 00:07:50.433 END TEST locking_app_on_locked_coremask 00:07:50.433 ************************************ 00:07:50.692 20:28:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:50.692 20:28:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.692 20:28:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.692 20:28:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.692 ************************************ 00:07:50.692 START TEST locking_overlapped_coremask 00:07:50.692 ************************************ 00:07:50.692 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:50.692 20:28:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=935970 00:07:50.692 20:28:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 935970 /var/tmp/spdk.sock 00:07:50.692 20:28:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:50.692 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 935970 ']' 00:07:50.692 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.692 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.692 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.692 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.692 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.692 [2024-07-26 20:28:39.088796] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
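locking_overlapped_coremask moves from one core to three: -m 0x7 is a hexadecimal core mask whose set bits 0, 1 and 2 produce the three "Reactor started on core N" lines that follow. Decoding a mask by hand:

  # each set bit in the cpumask is one reactor core
  mask=0x7
  for bit in $(seq 0 31); do
    (( (mask >> bit) & 1 )) && echo "core $bit"
  done
  # 0x7 -> core 0, core 1, core 2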
00:07:50.692 [2024-07-26 20:28:39.088842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935970 ] 00:07:50.692 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.692 [2024-07-26 20:28:39.173999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.692 [2024-07-26 20:28:39.216468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.692 [2024-07-26 20:28:39.216563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.692 [2024-07-26 20:28:39.216566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=936047 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 936047 /var/tmp/spdk2.sock 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 936047 /var/tmp/spdk2.sock 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 936047 /var/tmp/spdk2.sock 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 936047 ']' 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.631 20:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.631 [2024-07-26 20:28:39.953015] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
00:07:51.631 [2024-07-26 20:28:39.953070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936047 ] 00:07:51.631 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.631 [2024-07-26 20:28:40.078158] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 935970 has claimed it. 00:07:51.631 [2024-07-26 20:28:40.078199] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:52.239 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (936047) - No such process 00:07:52.239 ERROR: process (pid: 936047) is no longer running 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 935970 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 935970 ']' 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 935970 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.239 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 935970 00:07:52.240 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.240 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.240 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 935970' 00:07:52.240 killing process with pid 935970 00:07:52.240 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # 
kill 935970 00:07:52.240 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 935970 00:07:52.499 00:07:52.499 real 0m1.902s 00:07:52.499 user 0m5.347s 00:07:52.499 sys 0m0.513s 00:07:52.499 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.499 20:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.499 ************************************ 00:07:52.499 END TEST locking_overlapped_coremask 00:07:52.499 ************************************ 00:07:52.499 20:28:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:52.499 20:28:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.499 20:28:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.499 20:28:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.499 ************************************ 00:07:52.499 START TEST locking_overlapped_coremask_via_rpc 00:07:52.499 ************************************ 00:07:52.499 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:52.499 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=936277 00:07:52.499 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 936277 /var/tmp/spdk.sock 00:07:52.499 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:52.499 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 936277 ']' 00:07:52.499 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.499 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.499 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.499 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.499 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.758 [2024-07-26 20:28:41.071178] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:07:52.758 [2024-07-26 20:28:41.071225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936277 ] 00:07:52.758 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.758 [2024-07-26 20:28:41.156418] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
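The claim error above is the whole point of the overlapped test: the first target held 0x7 (cores 0 to 2) and the second asked for 0x1c (cores 2 to 4), so the masks collide on exactly one core. The arithmetic can be checked in the shell:

  # intersection of the two masks
  printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, bit 2, i.e. core 2

which matches the "Cannot create lock on core 2" record, and check_remaining_locks then confirmed the survivor still held /var/tmp/spdk_cpu_lock_000 through _002.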
00:07:52.758 [2024-07-26 20:28:41.156443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.758 [2024-07-26 20:28:41.194404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.758 [2024-07-26 20:28:41.194501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.758 [2024-07-26 20:28:41.194504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.325 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.325 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:53.325 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=936538 00:07:53.325 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 936538 /var/tmp/spdk2.sock 00:07:53.325 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:53.325 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 936538 ']' 00:07:53.325 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:53.325 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.325 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:53.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:53.325 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.325 20:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.584 [2024-07-26 20:28:41.917718] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:07:53.584 [2024-07-26 20:28:41.917770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936538 ] 00:07:53.584 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.584 [2024-07-26 20:28:42.034643] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:53.584 [2024-07-26 20:28:42.034676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.584 [2024-07-26 20:28:42.115059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.584 [2024-07-26 20:28:42.118677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.584 [2024-07-26 20:28:42.118678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:54.151 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.151 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:54.151 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.411 [2024-07-26 20:28:42.723694] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 936277 has claimed it. 
00:07:54.411 request: 00:07:54.411 { 00:07:54.411 "method": "framework_enable_cpumask_locks", 00:07:54.411 "req_id": 1 00:07:54.411 } 00:07:54.411 Got JSON-RPC error response 00:07:54.411 response: 00:07:54.411 { 00:07:54.411 "code": -32603, 00:07:54.411 "message": "Failed to claim CPU core: 2" 00:07:54.411 } 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 936277 /var/tmp/spdk.sock 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 936277 ']' 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 936538 /var/tmp/spdk2.sock 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 936538 ']' 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:54.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
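Unlike the launch-time collision, which aborts the starting process, the RPC path surfaces the conflict as a plain JSON-RPC internal error (-32603, "Failed to claim CPU core: 2") and the second target stays up, which is why the test can keep talking to both sockets afterwards. Reproducing the exchange by hand would look roughly like this, assuming the tree's scripts/rpc.py and the sockets used in this run:

  # first target (cores 0-2) has already enabled its locks;
  # the second target must fail to claim the overlapping core
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # -> request  {"method": "framework_enable_cpumask_locks", "req_id": 1}
  # <- response {"code": -32603, "message": "Failed to claim CPU core: 2"}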
00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.411 20:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.671 20:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.671 20:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:54.671 20:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:54.671 20:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:54.671 20:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:54.671 20:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:54.671 00:07:54.671 real 0m2.089s 00:07:54.671 user 0m0.809s 00:07:54.671 sys 0m0.212s 00:07:54.671 20:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.671 20:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.671 ************************************ 00:07:54.671 END TEST locking_overlapped_coremask_via_rpc 00:07:54.671 ************************************ 00:07:54.671 20:28:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:54.671 20:28:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 936277 ]] 00:07:54.671 20:28:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 936277 00:07:54.671 20:28:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 936277 ']' 00:07:54.671 20:28:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 936277 00:07:54.671 20:28:43 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:54.671 20:28:43 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.671 20:28:43 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 936277 00:07:54.671 20:28:43 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.671 20:28:43 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.671 20:28:43 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 936277' 00:07:54.671 killing process with pid 936277 00:07:54.671 20:28:43 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 936277 00:07:54.671 20:28:43 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 936277 00:07:55.240 20:28:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 936538 ]] 00:07:55.240 20:28:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 936538 00:07:55.240 20:28:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 936538 ']' 00:07:55.240 20:28:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 936538 00:07:55.240 20:28:43 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:55.240 20:28:43 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:07:55.240 20:28:43 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 936538 00:07:55.240 20:28:43 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:55.240 20:28:43 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:55.240 20:28:43 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 936538' 00:07:55.240 killing process with pid 936538 00:07:55.240 20:28:43 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 936538 00:07:55.240 20:28:43 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 936538 00:07:55.498 20:28:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:55.498 20:28:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:55.498 20:28:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 936277 ]] 00:07:55.498 20:28:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 936277 00:07:55.498 20:28:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 936277 ']' 00:07:55.498 20:28:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 936277 00:07:55.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (936277) - No such process 00:07:55.498 20:28:43 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 936277 is not found' 00:07:55.498 Process with pid 936277 is not found 00:07:55.498 20:28:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 936538 ]] 00:07:55.498 20:28:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 936538 00:07:55.498 20:28:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 936538 ']' 00:07:55.498 20:28:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 936538 00:07:55.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (936538) - No such process 00:07:55.498 20:28:43 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 936538 is not found' 00:07:55.498 Process with pid 936538 is not found 00:07:55.498 20:28:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:55.498 00:07:55.498 real 0m18.555s 00:07:55.498 user 0m30.729s 00:07:55.499 sys 0m6.327s 00:07:55.499 20:28:43 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.499 20:28:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.499 ************************************ 00:07:55.499 END TEST cpu_locks 00:07:55.499 ************************************ 00:07:55.499 00:07:55.499 real 0m42.770s 00:07:55.499 user 1m18.945s 00:07:55.499 sys 0m10.512s 00:07:55.499 20:28:43 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.499 20:28:43 event -- common/autotest_common.sh@10 -- # set +x 00:07:55.499 ************************************ 00:07:55.499 END TEST event 00:07:55.499 ************************************ 00:07:55.499 20:28:43 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:55.499 20:28:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.499 20:28:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.499 20:28:43 -- common/autotest_common.sh@10 -- # set +x 00:07:55.499 ************************************ 00:07:55.499 START TEST thread 00:07:55.499 ************************************ 00:07:55.499 20:28:43 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:55.758 * Looking for test storage... 00:07:55.758 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:55.758 20:28:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:55.758 20:28:44 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:55.758 20:28:44 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.758 20:28:44 thread -- common/autotest_common.sh@10 -- # set +x 00:07:55.758 ************************************ 00:07:55.758 START TEST thread_poller_perf 00:07:55.758 ************************************ 00:07:55.758 20:28:44 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:55.758 [2024-07-26 20:28:44.164831] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:07:55.758 [2024-07-26 20:28:44.164901] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936919 ] 00:07:55.758 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.758 [2024-07-26 20:28:44.249352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.758 [2024-07-26 20:28:44.287875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.758 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:57.135 ====================================== 00:07:57.135 busy:2508104302 (cyc) 00:07:57.135 total_run_count: 432000 00:07:57.135 tsc_hz: 2500000000 (cyc) 00:07:57.135 ====================================== 00:07:57.135 poller_cost: 5805 (cyc), 2322 (nsec) 00:07:57.135 00:07:57.135 real 0m1.207s 00:07:57.135 user 0m1.109s 00:07:57.135 sys 0m0.094s 00:07:57.135 20:28:45 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.135 20:28:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:57.135 ************************************ 00:07:57.135 END TEST thread_poller_perf 00:07:57.135 ************************************ 00:07:57.135 20:28:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:57.135 20:28:45 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:57.135 20:28:45 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.135 20:28:45 thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.135 ************************************ 00:07:57.135 START TEST thread_poller_perf 00:07:57.135 ************************************ 00:07:57.135 20:28:45 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:57.135 [2024-07-26 20:28:45.444165] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
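Before the second poller_perf run below finishes starting up, it is worth decoding the numbers the first run printed above: poller_cost is simply the busy cycle count divided by total_run_count, and the nanosecond figure follows from the 2.5 GHz TSC. A quick check of that (assumed) formula against the printed values:

    busy=2508104302 runs=432000 tsc_hz=2500000000
    cyc=$(( busy / runs ))                   # 5805 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))    # 2322 ns at 2.5 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The 0-microsecond run that follows avoids the timer re-arm overhead of the periodic pollers, which is why its cost drops to 441 cycles (176 nsec).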
00:07:57.135 [2024-07-26 20:28:45.444244] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937202 ] 00:07:57.135 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.135 [2024-07-26 20:28:45.526000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.135 [2024-07-26 20:28:45.563698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.135 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:58.072 ====================================== 00:07:58.072 busy:2502027270 (cyc) 00:07:58.072 total_run_count: 5664000 00:07:58.072 tsc_hz: 2500000000 (cyc) 00:07:58.072 ====================================== 00:07:58.072 poller_cost: 441 (cyc), 176 (nsec) 00:07:58.331 00:07:58.331 real 0m1.203s 00:07:58.331 user 0m1.114s 00:07:58.331 sys 0m0.086s 00:07:58.331 20:28:46 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.331 20:28:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:58.331 ************************************ 00:07:58.331 END TEST thread_poller_perf 00:07:58.331 ************************************ 00:07:58.331 20:28:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:58.331 00:07:58.331 real 0m2.669s 00:07:58.331 user 0m2.328s 00:07:58.331 sys 0m0.355s 00:07:58.331 20:28:46 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.331 20:28:46 thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.331 ************************************ 00:07:58.331 END TEST thread 00:07:58.331 ************************************ 00:07:58.331 20:28:46 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:58.331 20:28:46 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:58.331 20:28:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.331 20:28:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.331 20:28:46 -- common/autotest_common.sh@10 -- # set +x 00:07:58.331 ************************************ 00:07:58.331 START TEST app_cmdline 00:07:58.331 ************************************ 00:07:58.331 20:28:46 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:58.331 * Looking for test storage... 00:07:58.331 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:58.331 20:28:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:58.331 20:28:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=937522 00:07:58.331 20:28:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 937522 00:07:58.331 20:28:46 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 937522 ']' 00:07:58.331 20:28:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:58.331 20:28:46 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.331 20:28:46 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.331 20:28:46 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:58.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.331 20:28:46 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.331 20:28:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:58.590 [2024-07-26 20:28:46.891246] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:07:58.590 [2024-07-26 20:28:46.891298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937522 ] 00:07:58.590 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.590 [2024-07-26 20:28:46.973015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.590 [2024-07-26 20:28:47.012390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.158 20:28:47 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.158 20:28:47 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:59.158 20:28:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:59.416 { 00:07:59.416 "version": "SPDK v24.09-pre git sha1 cac68eec0", 00:07:59.416 "fields": { 00:07:59.416 "major": 24, 00:07:59.416 "minor": 9, 00:07:59.416 "patch": 0, 00:07:59.417 "suffix": "-pre", 00:07:59.417 "commit": "cac68eec0" 00:07:59.417 } 00:07:59.417 } 00:07:59.417 20:28:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:59.417 20:28:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:59.417 20:28:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:59.417 20:28:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:59.417 20:28:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:59.417 20:28:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:59.417 20:28:47 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.417 20:28:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:59.417 20:28:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:59.417 20:28:47 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.417 20:28:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:59.417 20:28:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:59.417 20:28:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.417 20:28:47 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:59.417 20:28:47 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.417 20:28:47 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:59.417 20:28:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.417 20:28:47 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:59.417 20:28:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.417 20:28:47 
app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:59.417 20:28:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.417 20:28:47 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:59.417 20:28:47 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:59.417 20:28:47 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.676 request: 00:07:59.676 { 00:07:59.676 "method": "env_dpdk_get_mem_stats", 00:07:59.676 "req_id": 1 00:07:59.676 } 00:07:59.676 Got JSON-RPC error response 00:07:59.676 response: 00:07:59.676 { 00:07:59.676 "code": -32601, 00:07:59.676 "message": "Method not found" 00:07:59.676 } 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.676 20:28:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 937522 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 937522 ']' 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 937522 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 937522 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 937522' 00:07:59.676 killing process with pid 937522 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@969 -- # kill 937522 00:07:59.676 20:28:48 app_cmdline -- common/autotest_common.sh@974 -- # wait 937522 00:07:59.936 00:07:59.936 real 0m1.646s 00:07:59.936 user 0m1.900s 00:07:59.936 sys 0m0.480s 00:07:59.936 20:28:48 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.936 20:28:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:59.936 ************************************ 00:07:59.936 END TEST app_cmdline 00:07:59.936 ************************************ 00:07:59.936 20:28:48 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:59.936 20:28:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.936 20:28:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.936 20:28:48 -- common/autotest_common.sh@10 -- # set +x 00:07:59.936 ************************************ 00:07:59.936 START TEST version 00:07:59.936 ************************************ 00:07:59.936 20:28:48 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:00.195 * Looking for test storage... 
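The app_cmdline test traced above exercises the RPC allow-list: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so spdk_get_version returns the version JSON while env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 (Method not found). Condensed, the interaction looks roughly like this (paths as used in this workspace; the sleep stands in for the waitforlisten helper the test actually uses):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt=$!
    sleep 1                                    # crude; the real test polls /var/tmp/spdk.sock
    $SPDK/scripts/rpc.py spdk_get_version      # allowed: prints the version object
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats \
        || echo 'rejected as expected: -32601 Method not found'
    kill $tgt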
00:08:00.195 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:00.195 20:28:48 version -- app/version.sh@17 -- # get_header_version major 00:08:00.195 20:28:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:00.195 20:28:48 version -- app/version.sh@14 -- # cut -f2 00:08:00.195 20:28:48 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.195 20:28:48 version -- app/version.sh@17 -- # major=24 00:08:00.195 20:28:48 version -- app/version.sh@18 -- # get_header_version minor 00:08:00.195 20:28:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:00.195 20:28:48 version -- app/version.sh@14 -- # cut -f2 00:08:00.195 20:28:48 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.195 20:28:48 version -- app/version.sh@18 -- # minor=9 00:08:00.195 20:28:48 version -- app/version.sh@19 -- # get_header_version patch 00:08:00.195 20:28:48 version -- app/version.sh@14 -- # cut -f2 00:08:00.195 20:28:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:00.195 20:28:48 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.195 20:28:48 version -- app/version.sh@19 -- # patch=0 00:08:00.195 20:28:48 version -- app/version.sh@20 -- # get_header_version suffix 00:08:00.195 20:28:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:00.195 20:28:48 version -- app/version.sh@14 -- # cut -f2 00:08:00.195 20:28:48 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.195 20:28:48 version -- app/version.sh@20 -- # suffix=-pre 00:08:00.195 20:28:48 version -- app/version.sh@22 -- # version=24.9 00:08:00.195 20:28:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:00.195 20:28:48 version -- app/version.sh@28 -- # version=24.9rc0 00:08:00.195 20:28:48 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:00.196 20:28:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:00.196 20:28:48 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:00.196 20:28:48 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:00.196 00:08:00.196 real 0m0.178s 00:08:00.196 user 0m0.091s 00:08:00.196 sys 0m0.132s 00:08:00.196 20:28:48 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.196 20:28:48 version -- common/autotest_common.sh@10 -- # set +x 00:08:00.196 ************************************ 00:08:00.196 END TEST version 00:08:00.196 ************************************ 00:08:00.196 20:28:48 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:08:00.196 20:28:48 -- spdk/autotest.sh@201 -- # [[ 0 -eq 1 ]] 00:08:00.196 20:28:48 -- spdk/autotest.sh@207 -- # uname -s 00:08:00.196 20:28:48 -- spdk/autotest.sh@207 -- # [[ Linux == Linux ]] 00:08:00.196 20:28:48 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:08:00.196 20:28:48 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:08:00.196 20:28:48 -- 
spdk/autotest.sh@220 -- # '[' 0 -eq 1 ']' 00:08:00.196 20:28:48 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:08:00.196 20:28:48 -- spdk/autotest.sh@269 -- # timing_exit lib 00:08:00.196 20:28:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:00.196 20:28:48 -- common/autotest_common.sh@10 -- # set +x 00:08:00.196 20:28:48 -- spdk/autotest.sh@271 -- # '[' 0 -eq 1 ']' 00:08:00.196 20:28:48 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:08:00.196 20:28:48 -- spdk/autotest.sh@285 -- # '[' 1 -eq 1 ']' 00:08:00.196 20:28:48 -- spdk/autotest.sh@286 -- # export NET_TYPE 00:08:00.196 20:28:48 -- spdk/autotest.sh@289 -- # '[' rdma = rdma ']' 00:08:00.196 20:28:48 -- spdk/autotest.sh@290 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:00.196 20:28:48 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:00.196 20:28:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.196 20:28:48 -- common/autotest_common.sh@10 -- # set +x 00:08:00.455 ************************************ 00:08:00.455 START TEST nvmf_rdma 00:08:00.455 ************************************ 00:08:00.455 20:28:48 nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:00.455 * Looking for test storage... 00:08:00.455 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:00.455 20:28:48 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:08:00.455 20:28:48 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:00.455 20:28:48 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:08:00.455 20:28:48 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:00.455 20:28:48 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.455 20:28:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:00.455 ************************************ 00:08:00.455 START TEST nvmf_target_core 00:08:00.455 ************************************ 00:08:00.455 20:28:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:08:00.715 * Looking for test storage... 00:08:00.715 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.715 20:28:49 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.716 ************************************ 00:08:00.716 START TEST nvmf_abort 00:08:00.716 ************************************ 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:00.716 * Looking for test storage... 
00:08:00.716 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.716 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.717 20:28:49 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:10.698 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:10.698 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:10.698 20:28:57 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:10.698 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:10.698 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:10.699 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:10.699 20:28:57 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:10.699 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:10.699 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:10.699 altname enp217s0f0np0 00:08:10.699 altname ens818f0np0 00:08:10.699 inet 192.168.100.8/24 scope global mlx_0_0 00:08:10.699 valid_lft forever preferred_lft forever 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # for 
nic_name in $(get_rdma_if_list) 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:10.699 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:10.699 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:10.699 altname enp217s0f1np1 00:08:10.699 altname ens818f1np1 00:08:10.699 inet 192.168.100.9/24 scope global mlx_0_1 00:08:10.699 valid_lft forever preferred_lft forever 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:10.699 20:28:57 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:10.699 192.168.100.9' 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:10.699 192.168.100.9' 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:10.699 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:10.699 192.168.100.9' 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 
-- # set +x 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=942110 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 942110 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 942110 ']' 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.700 20:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:10.700 [2024-07-26 20:28:57.804738] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:08:10.700 [2024-07-26 20:28:57.804802] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.700 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.700 [2024-07-26 20:28:57.889244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.700 [2024-07-26 20:28:57.930218] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.700 [2024-07-26 20:28:57.930262] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.700 [2024-07-26 20:28:57.930271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.700 [2024-07-26 20:28:57.930280] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.700 [2024-07-26 20:28:57.930286] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
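For reference, the trace above (nvmfappstart / waitforlisten) boils down to starting the SPDK target and blocking until its RPC socket answers. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket seen in the trace; the polling loop here is an illustrative stand-in for the suite's waitforlisten helper, not its exact code, and paths are abbreviated relative to the spdk checkout:

    # Start nvmf_tgt pinned to cores 1-3 (-m 0xE) with all tracepoint groups on (-e 0xFFFF)
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Block until the app answers on the default RPC UNIX domain socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done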
00:08:10.700 [2024-07-26 20:28:57.930392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.700 [2024-07-26 20:28:57.930483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.700 [2024-07-26 20:28:57.930485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:10.700 [2024-07-26 20:28:58.693183] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xae6520/0xaeaa10) succeed. 00:08:10.700 [2024-07-26 20:28:58.709488] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xae7ac0/0xb2c0a0) succeed. 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:10.700 Malloc0 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:10.700 Delay0 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:10.700 [2024-07-26 20:28:58.876046] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.700 20:28:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:10.700 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.700 [2024-07-26 20:28:58.981255] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:12.604 Initializing NVMe Controllers 00:08:12.604 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:12.604 controller IO queue size 128 less than required 00:08:12.604 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:12.604 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:12.604 Initialization complete. Launching workers. 
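Stripped of the xtrace noise, the target-side setup traced above is the following RPC sequence (commands copied from the trace; rpc.py abbreviates the full scripts/rpc.py path). The bdev_delay_create latencies are in microseconds and are deliberately large, so the abort example launched at abort.sh@30 reliably finds queued I/O to cancel:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MB bdev, 4096-byte blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420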
00:08:12.604 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51515 00:08:12.604 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51576, failed to submit 62 00:08:12.604 success 51516, unsuccess 60, failed 0 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:12.604 rmmod nvme_rdma 00:08:12.604 rmmod nvme_fabrics 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 942110 ']' 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 942110 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 942110 ']' 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 942110 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:08:12.604 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.862 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 942110 00:08:12.862 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:12.862 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:12.862 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 942110' 00:08:12.863 killing process with pid 942110 00:08:12.863 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 942110 00:08:12.863 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 942110 00:08:13.121 20:29:01 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:13.121 00:08:13.121 real 0m12.335s 00:08:13.121 user 0m14.982s 00:08:13.121 sys 0m7.013s 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:13.121 ************************************ 00:08:13.121 END TEST nvmf_abort 00:08:13.121 ************************************ 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.121 ************************************ 00:08:13.121 START TEST nvmf_ns_hotplug_stress 00:08:13.121 ************************************ 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:13.121 * Looking for test storage... 00:08:13.121 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.121 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:13.380 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.380 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.380 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.380 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.380 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.380 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.380 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.381 20:29:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:21.580 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:21.580 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:21.580 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.581 20:29:09 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:21.581 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:21.581 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:21.581 20:29:09 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:21.581 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:21.581 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:21.581 altname enp217s0f0np0 00:08:21.581 altname ens818f0np0 00:08:21.581 inet 192.168.100.8/24 scope global mlx_0_0 00:08:21.581 valid_lft forever preferred_lft forever 
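The NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP values come from the get_ip_address helper traced at nvmf/common.sh@112-113. Reconstructed from the trace, it is an ip/awk/cut pipeline that returns the first IPv4 address of an interface with the CIDR prefix stripped:

    get_ip_address() {
        local interface=$1
        # Field 4 of 'ip -o -4 addr show' is ADDR/PREFIX; keep only ADDR
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig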
00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:21.581 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:21.581 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:21.581 altname enp217s0f1np1 00:08:21.581 altname ens818f1np1 00:08:21.581 inet 192.168.100.9/24 scope global mlx_0_1 00:08:21.581 valid_lft forever preferred_lft forever 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:21.581 20:29:09 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:21.581 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:21.582 192.168.100.9' 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:21.582 192.168.100.9' 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:21.582 192.168.100.9' 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:21.582 20:29:09 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=946833 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 946833 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 946833 ']' 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.582 20:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.582 [2024-07-26 20:29:09.810624] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:08:21.582 [2024-07-26 20:29:09.810678] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.582 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.582 [2024-07-26 20:29:09.895160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.582 [2024-07-26 20:29:09.932758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.582 [2024-07-26 20:29:09.932800] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.582 [2024-07-26 20:29:09.932809] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.582 [2024-07-26 20:29:09.932817] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.582 [2024-07-26 20:29:09.932824] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
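The app_setup_trace notices above describe how to inspect the tracepoints enabled by -e 0xFFFF. Per those notices, either of the following works while the target (shm id 0) is running; the /tmp destination in the second line is an arbitrary choice for illustration:

    # Live snapshot of nvmf events at runtime
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0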
00:08:21.582 [2024-07-26 20:29:09.932925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.582 [2024-07-26 20:29:09.933013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.582 [2024-07-26 20:29:09.933015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.150 20:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.150 20:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:08:22.150 20:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.150 20:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:22.150 20:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.150 20:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.150 20:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:22.150 20:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:22.409 [2024-07-26 20:29:10.833929] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x551520/0x555a10) succeed. 00:08:22.409 [2024-07-26 20:29:10.843234] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x552ac0/0x5970a0) succeed. 00:08:22.668 20:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:22.668 20:29:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:22.927 [2024-07-26 20:29:11.311808] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:22.927 20:29:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:23.186 20:29:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:23.186 Malloc0 00:08:23.186 20:29:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:23.444 Delay0 00:08:23.444 20:29:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.703 20:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:08:23.703 NULL1 00:08:23.703 20:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:23.962 20:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=947223 00:08:23.962 20:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:23.962 20:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:23.962 20:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.962 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.337 Read completed with error (sct=0, sc=11) 00:08:25.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.337 20:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.337 20:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:25.337 20:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:25.596 true 00:08:25.596 20:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:25.596 20:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.533 20:29:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.533 20:29:14 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:26.533 20:29:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:26.792 true 00:08:26.792 20:29:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:26.792 20:29:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.729 20:29:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.729 20:29:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:27.729 20:29:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:27.988 true 00:08:27.988 20:29:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:27.988 20:29:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.923 20:29:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.923 20:29:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:28.923 20:29:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:29.182 true 00:08:29.182 20:29:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
947223 00:08:29.182 20:29:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.118 20:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.118 20:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:30.118 20:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:30.376 true 00:08:30.376 20:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:30.376 20:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.311 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.311 20:29:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.311 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.311 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.311 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.311 20:29:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:31.311 20:29:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:31.311 true 00:08:31.570 20:29:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:31.570 20:29:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.505 20:29:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.505 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:08:32.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.505 20:29:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:32.505 20:29:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:32.763 true 00:08:32.763 20:29:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:32.763 20:29:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.698 20:29:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.698 20:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:33.698 20:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:33.957 true 00:08:33.957 20:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:33.957 20:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.892 20:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
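
For orientation: the sh@NN tags in this trace are bash xtrace line numbers from test/nvmf/target/ns_hotplug_stress.sh, and the repeating sh@44-sh@50 block above is one pass of its hot-plug loop: detach namespace 1, re-attach the Delay0 bdev, then bump the size of the NULL1 bdev by one (1001, 1002, ...), all while the 30-second spdk_nvme_perf job launched earlier keeps issuing reads, which is why the rate-limited "Read completed with error (sct=0, sc=11)" messages appear whenever the namespace is detached. A minimal sketch of that loop, reconstructed from the line tags in this log rather than quoted from the script, with $rpc standing for /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    # PERF_PID holds the pid of the backgrounded spdk_nvme_perf run (947223 above);
    # keep cycling namespaces for as long as that process is alive
    while kill -0 $PERF_PID; do
        # hot-unplug NSID 1, then hot-plug the Delay0 bdev back in
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        # grow NULL1 by bumping the size argument each pass: 1001, 1002, ...
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size
    done

When the perf run exits, kill -0 fails (the "No such process" message further below) and the loop ends.
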
00:08:34.892 20:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:34.892 20:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:35.151 true 00:08:35.151 20:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:35.151 20:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.087 20:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.087 20:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:36.087 20:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:36.346 true 00:08:36.346 20:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:36.346 20:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.286 20:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.286 20:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:37.286 20:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:37.604 true 00:08:37.604 20:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:37.604 20:29:25 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.538 20:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.538 20:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:38.538 20:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:38.797 true 00:08:38.797 20:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:38.797 20:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.733 20:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.733 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.733 20:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:39.733 20:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:39.992 true 00:08:39.992 20:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:39.992 20:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.930 20:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.930 20:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:40.930 20:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:41.190 true 00:08:41.190 20:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:41.190 20:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.127 20:29:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.127 20:29:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:42.127 20:29:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:42.387 true 00:08:42.387 20:29:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:42.387 20:29:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.214 20:29:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.214 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:08:43.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.214 20:29:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:43.214 20:29:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:43.473 true 00:08:43.473 20:29:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:43.473 20:29:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.410 20:29:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.410 20:29:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:44.410 20:29:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:44.669 true 00:08:44.669 20:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:44.669 20:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.609 20:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.609 20:29:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:45.609 20:29:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:45.868 true 00:08:45.868 20:29:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:45.868 
20:29:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.805 20:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.805 20:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:46.805 20:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:47.065 true 00:08:47.065 20:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:47.065 20:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.003 20:29:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.003 20:29:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:48.003 20:29:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:48.262 true 00:08:48.262 20:29:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:48.262 20:29:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.200 20:29:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.200 20:29:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:49.200 20:29:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:49.459 true 00:08:49.459 20:29:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:49.459 20:29:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.395 20:29:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.395 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.395 20:29:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:50.395 20:29:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:50.654 true 00:08:50.654 20:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:50.654 20:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.589 20:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.589 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:08:51.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.848 20:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:51.848 20:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:51.848 true 00:08:52.106 20:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:52.106 20:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.674 20:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.933 20:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:52.933 20:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:53.192 true 00:08:53.192 20:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:53.192 20:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.129 20:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.129 20:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:54.129 20:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:54.388 true 00:08:54.388 20:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:54.388 20:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.386 20:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.386 20:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:55.386 20:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:55.386 true 00:08:55.386 20:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:55.386 20:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.644 20:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.902 20:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:55.902 20:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:55.902 true 00:08:55.902 20:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:55.902 20:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.160 20:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.418 20:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:56.418 20:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:56.418 true 00:08:56.675 20:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223 00:08:56.675 20:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.675 20:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.933 20:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:56.933 20:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:56.933 true
00:08:57.191 20:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223
00:08:57.191 20:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:57.191 Initializing NVMe Controllers
00:08:57.191 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:57.191 Controller IO queue size 128, less than required.
00:08:57.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:57.191 Controller IO queue size 128, less than required.
00:08:57.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:57.191 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:57.191 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:57.191 Initialization complete. Launching workers.
00:08:57.191 ========================================================
00:08:57.191                                                           Latency(us)
00:08:57.191 Device Information                                      : IOPS       MiB/s    Average      min        max
00:08:57.191 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  6101.83    2.98    19546.41    869.65    1133986.98
00:08:57.191 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 34469.93   16.83     3713.27   1251.34     285221.61
00:08:57.191 ========================================================
00:08:57.191 Total                                                   : 40571.77   19.81     6094.51    869.65    1133986.98
00:08:57.191
00:08:57.191 20:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:57.449 20:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:08:57.449 20:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:08:57.449 true
00:08:57.707 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 947223
00:08:57.707 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (947223) - No such process
00:08:57.707 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 947223
00:08:57.707 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:57.707 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:57.965 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:57.965 20:29:46 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:57.965 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:57.965 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:57.965 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:58.223 null0 00:08:58.223 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:58.223 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:58.223 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:58.223 null1 00:08:58.223 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:58.223 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:58.223 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:58.481 null2 00:08:58.481 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:58.481 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:58.481 20:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:58.739 null3 00:08:58.740 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:58.740 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:58.740 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:58.740 null4 00:08:58.740 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:58.740 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:58.740 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:58.998 null5 00:08:58.998 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:58.998 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:58.998 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:59.256 null6 00:08:59.256 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:59.256 
20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:59.256 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:59.516 null7 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
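
The null0 through null7 bdevs created just above come from the sh@59/sh@60 setup loop of the 8-thread phase. A sketch, using the same $rpc shorthand as the earlier one (the positional arguments to bdev_null_create are the bdev name, total size, and block size, 100 and 4096 here):

    # one small null bdev per worker thread: null0 .. null7
    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create null$i 100 4096
    done
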
00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
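
The sh@14-sh@18 tags interleaved through this stretch are the add_remove helper that each worker runs: attach a fixed namespace ID, detach it, ten times over. Reconstructed from the trace, with the same caveats and shorthand as above:

    add_remove() {
        local nsid=$1 bdev=$2
        # hammer one namespace ID with ten attach/detach cycles
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
        done
    }
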
00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
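
Finally, the sh@62-sh@64 tags are the driver forking eight of those workers in parallel, one NSID/bdev pair each; the wait on their PIDs (953493 953494 ...) shows up just below. A sketch of that driver loop, again inferred from the trace rather than quoted:

    # launch the eight add/remove workers concurrently and reap them;
    # NSIDs 1..8 map onto bdevs null0..null7
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) null$i &
        pids+=($!)
    done
    wait "${pids[@]}"

Running the eight attach/detach streams against one subsystem at once is what exercises the namespace hot-plug paths concurrently, rather than one change at a time as in the earlier resize loop.
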
00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:59.516 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.517 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:59.517 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:59.517 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 953493 953494 953496 953499 953502 953504 953507 953509 00:08:59.517 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:59.517 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:59.517 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:59.517 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.517 20:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:59.517 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:59.517 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:59.517 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:59.517 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:59.517 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.517 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:59.517 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:59.517 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.776 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:59.777 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.777 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.777 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:59.777 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.777 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.777 20:29:48 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.036 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:00.295 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
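The interleaved xtrace above is the ns_hotplug_stress workers running in parallel: each worker loops ten times, attaching its null bdev to subsystem nqn.2016-06.io.spdk:cnode1 under a fixed namespace ID and detaching it again, while the spawning shell collects the worker PIDs and waits on them (the "wait 953493 953494 ..." entry). A minimal sketch of that pattern, reconstructed from the script line numbers in the trace (ns_hotplug_stress.sh @14-@18 and @62-@66) rather than copied from the SPDK source, so details may differ:

    # Hedged reconstruction -- not the verbatim SPDK script.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    nthreads=8                                 # one worker per null bdev (eight PIDs in the wait list)

    add_remove() {                             # @14: local nsid=8 bdev=null7
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do         # @16: (( i = 0 )) / (( i < 10 )) / (( ++i ))
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
        done
    }

    pids=()
    for ((i = 0; i < nthreads; i++)); do       # @62: (( i < nthreads ))
        add_remove "$((i + 1))" "null$i" &     # @63: e.g. add_remove 8 null7
        pids+=($!)
    done
    wait "${pids[@]}"                          # @66: wait on all workers

Because all the workers share one trace, their loop-counter lines interleave; that is why back-to-back (( ++i )) / (( i < 10 )) entries from different workers appear above, and why the add and remove commands arrive in no fixed namespace order.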
00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.555 20:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:00.814 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.815 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:01.074 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:01.074 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:01.074 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:01.074 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:01.074 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:01.074 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.074 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:01.074 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:01.332 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:01.590 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:01.591 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:01.591 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:01.591 20:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.591 20:29:50 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.591 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
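Each iteration is just two JSON-RPC calls against the running target, exactly as printed in the trace. For reference, a single add/remove cycle issued by hand looks like this (same workspace path as above; the null* bdevs were created earlier in the test run):

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Attach bdev null6 to subsystem cnode1 as namespace 7 ...
    "$spdk/scripts/rpc.py" nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
    # ... then hot-remove that namespace again.
    "$spdk/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7

The removal IDs in the trace (2, 4, 8, 3, 1, 6, 5, 7 in one round) look shuffled simply because each ID is owned by a different worker and the RPCs complete in scheduler order; surviving that arbitrary attach/detach ordering while initiators stay connected is the point of the stress test.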
00:09:01.850 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.850 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:01.850 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:01.850 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:01.850 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:01.850 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:01.850 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:01.850 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:01.850 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.850 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.850 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.108 20:29:50 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:02.108 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.367 20:29:50 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.367 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:02.626 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.626 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:02.626 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:02.626 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:02.626 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:02.626 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:02.626 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:02.626 20:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:02.626 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.626 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.626 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:02.626 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.626 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.626 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:02.626 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:09:02.626 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.626 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:02.626 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.626 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.626 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:02.886 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # 
'[' rdma == tcp ']' 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:03.145 rmmod nvme_rdma 00:09:03.145 rmmod nvme_fabrics 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 946833 ']' 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 946833 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 946833 ']' 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 946833 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 946833 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:03.145 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 946833' 00:09:03.146 killing process with pid 946833 00:09:03.146 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 946833 00:09:03.146 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 946833 00:09:03.405 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:03.405 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:03.405 00:09:03.405 real 0m50.360s 00:09:03.405 user 3m19.212s 00:09:03.405 sys 0m15.539s 00:09:03.405 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.405 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.405 ************************************ 00:09:03.405 END TEST nvmf_ns_hotplug_stress 00:09:03.405 ************************************ 00:09:03.405 20:29:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:03.405 20:29:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:03.405 20:29:51 
nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.405 20:29:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.664 ************************************ 00:09:03.664 START TEST nvmf_delete_subsystem 00:09:03.664 ************************************ 00:09:03.664 20:29:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:03.664 * Looking for test storage... 00:09:03.664 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.664 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.665 20:29:52 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.665 20:29:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:11.778 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:11.779 
20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:11.779 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:11.779 20:30:00 
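The pci_bus_cache lookups above amount to matching PCI functions against known RDMA-capable vendor:device pairs (Intel E810/x722 and the Mellanox ConnectX families); on this rig both hits are 0x15b3:0x1015 (ConnectX-4 Lx) at 0000:d9:00.0 and 0000:d9:00.1. A rough equivalent using plain lspci, offered as a sketch rather than the harness's cache-based implementation:

  # Sketch: enumerate Mellanox devices the way the mlx table above does.
  # Device-ID list copied from the nvmf/common.sh trace; lspci -d filters
  # by vendor:device and -D prints the full domain:bus:dev.fn address.
  mellanox=15b3
  for dev_id in 0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013; do
      lspci -D -d "${mellanox}:${dev_id#0x}" | while read -r addr _; do
          echo "Found ${addr} (0x${mellanox} - ${dev_id})"
      done
  done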
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:11.779 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:11.779 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:11.779 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@414 -- # is_hw=yes 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.779 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.780 
20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:11.780 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:11.780 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:11.780 altname enp217s0f0np0 00:09:11.780 altname ens818f0np0 00:09:11.780 inet 192.168.100.8/24 scope global mlx_0_0 00:09:11.780 valid_lft forever preferred_lft forever 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:11.780 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:11.780 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:11.780 altname enp217s0f1np1 00:09:11.780 altname ens818f1np1 00:09:11.780 inet 192.168.100.9/24 scope global mlx_0_1 00:09:11.780 valid_lft forever preferred_lft forever 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:11.780 20:30:00 
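allocate_nic_ips above resolves each RDMA interface to its IPv4 address by trimming the CIDR suffix from ip(8) output, yielding 192.168.100.8 and 192.168.100.9. Further down in the trace the same helper feeds RDMA_IP_LIST, from which the first and second target IPs are peeled off with head/tail. A condensed sketch of both steps, assuming the mlx_0_0/mlx_0_1 interface names seen here:

  # Sketch of get_ip_address (nvmf/common.sh@112-113) and the target-IP split.
  get_ip_address() {
      local interface=$1
      # field 4 of `ip -o -4 addr show` is e.g. "192.168.100.8/24"
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST="$(get_ip_address mlx_0_0)
  $(get_ip_address mlx_0_1)"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9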
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:11.780 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # 
for nic_name in $(get_rdma_if_list) 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:12.039 192.168.100.9' 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:12.039 192.168.100.9' 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:12.039 192.168.100.9' 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=958509 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 958509 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 958509 ']' 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.039 20:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.039 [2024-07-26 20:30:00.485611] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:09:12.039 [2024-07-26 20:30:00.485678] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.039 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.039 [2024-07-26 20:30:00.573537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:12.298 [2024-07-26 20:30:00.612351] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.298 [2024-07-26 20:30:00.612389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.298 [2024-07-26 20:30:00.612399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.298 [2024-07-26 20:30:00.612408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.298 [2024-07-26 20:30:00.612415] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.298 [2024-07-26 20:30:00.612460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.298 [2024-07-26 20:30:00.612463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.866 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.866 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:09:12.866 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:12.866 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:12.866 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.866 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.866 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:12.866 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.866 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.866 [2024-07-26 20:30:01.368095] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7006e0/0x704bd0) succeed. 00:09:12.866 [2024-07-26 20:30:01.377094] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x701be0/0x746260) succeed. 
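delete_subsystem.sh@15-24, traced through here, stands the target up entirely over RPC: an RDMA transport, one subsystem with a listener on 192.168.100.8:4420, and a null bdev wrapped in a delay bdev so that I/O is still in flight when the subsystem is later deleted. The same sequence issued directly with scripts/rpc.py, arguments copied from the trace (a sketch, not the script itself):

  # Target setup over JSON-RPC, mirroring delete_subsystem.sh@15-24.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_null_create NULL1 1000 512          # 1000 MiB null bdev, 512-byte blocks
  # ~1 s average/p99 latencies on reads and writes keep requests queued at the target
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0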
00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.125 [2024-07-26 20:30:01.470263] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.125 NULL1 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.125 Delay0 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=958762 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:13.125 20:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 
512 -P 4 00:09:13.125 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.125 [2024-07-26 20:30:01.583086] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:15.026 20:30:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.026 20:30:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.026 20:30:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:16.471 NVMe io qpair process completion error 00:09:16.471 NVMe io qpair process completion error 00:09:16.471 NVMe io qpair process completion error 00:09:16.471 NVMe io qpair process completion error 00:09:16.471 NVMe io qpair process completion error 00:09:16.471 NVMe io qpair process completion error 00:09:16.471 20:30:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.471 20:30:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:16.471 20:30:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 958762 00:09:16.471 20:30:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:16.729 20:30:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:16.729 20:30:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 958762 00:09:16.729 20:30:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:17.297 Read completed with error (sct=0, sc=8) 00:09:17.297 starting I/O failed: -6 00:09:17.297 Read completed with error (sct=0, sc=8) 00:09:17.297 starting I/O failed: -6 00:09:17.297 Read completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Read completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Read completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Write completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Write completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Read completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Read completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Write completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Read completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Write completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Read completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Read completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Write completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Write completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Read completed with error (sct=0, sc=8) 00:09:17.298 starting I/O failed: -6 00:09:17.298 Read completed with error (sct=0, sc=8) 
00:09:17.298 starting I/O failed: -6
[long run of repeated 'Read/Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' qpair completions elided]
00:09:17.299 Initializing NVMe Controllers
00:09:17.299 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:17.299 Controller IO queue size 128, less than required.
00:09:17.299 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:17.299 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:17.299 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:17.299 Initialization complete. Launching workers.
00:09:17.299 ========================================================
00:09:17.299 Latency(us)
00:09:17.299 Device Information : IOPS MiB/s Average min max
00:09:17.299 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.40 0.04 1594980.17 1000111.14 2980887.62
00:09:17.299 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.40 0.04 1596495.81 1000371.69 2982314.99
00:09:17.299 ========================================================
00:09:17.299 Total : 160.81 0.08 1595737.99 1000111.14 2982314.99
00:09:17.299
00:09:17.299 20:30:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
20:30:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 958762
20:30:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:17.299 [2024-07-26 20:30:05.681307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:09:17.299 [2024-07-26 20:30:05.681349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
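This is the core of the test: spdk_nvme_perf is started against the subsystem, nvmf_delete_subsystem is issued two seconds in, and the script then polls the perf PID until the orphaned initiator gives up (the @34-38 loop above; the qpair errors and the truncated latency table are the expected fallout). A standalone sketch of that pattern, reusing the commands and flags from the trace:

  # Delete-while-I/O-in-flight, as in delete_subsystem.sh@26-38.
  perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2                                       # let queues fill before pulling the subsystem
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do    # kill -0 probes the PID without signaling it
      (( delay++ > 30 )) && { echo "perf survived subsystem deletion" >&2; exit 1; }
      sleep 0.5
  done
  wait "$perf_pid" || echo "perf exited with errors, as expected"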
00:09:17.299 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 958762 00:09:17.868 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (958762) - No such process 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 958762 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 958762 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 958762 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.868 [2024-07-26 20:30:06.199540] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=960061 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:17.868 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:17.868 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.868 [2024-07-26 20:30:06.286318] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:18.436 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:18.436 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:18.436 20:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:18.694 20:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:18.694 20:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:18.694 20:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:19.262 20:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:19.262 20:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:19.262 20:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:19.830 20:30:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:19.830 20:30:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:19.830 20:30:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:20.398 20:30:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:20.398 20:30:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:20.398 20:30:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:20.966 20:30:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:20.966 20:30:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:20.967 20:30:09 
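The 'NOT wait 958762' dance above (common/autotest_common.sh@650-677) asserts the inverse condition: once the perf process is gone, wait on its PID must fail, and the harness turns that expected failure into a pass. A simplified, hypothetical version of such a negation helper (the real one also validates the argument type and inspects exit codes):

  # Simplified sketch of a NOT-style helper: succeed only when the command fails.
  NOT() {
      if "$@"; then
          return 1    # command unexpectedly succeeded
      fi
      return 0        # command failed, which is what we asserted
  }
  # Usage, as in delete_subsystem.sh@45: the PID is no longer a child of the
  # shell, so `wait` errors out and NOT returns success.
  NOT wait 958762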
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:21.225 20:30:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:21.225 20:30:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:21.225 20:30:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:21.795 20:30:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:21.795 20:30:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:21.795 20:30:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:22.363 20:30:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:22.363 20:30:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:22.364 20:30:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:22.932 20:30:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:22.932 20:30:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:22.932 20:30:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:23.500 20:30:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:23.500 20:30:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:23.500 20:30:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:23.760 20:30:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:23.760 20:30:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:23.760 20:30:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:24.328 20:30:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:24.328 20:30:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:24.328 20:30:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:24.912 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:24.912 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061 00:09:24.912 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:24.912 Initializing NVMe Controllers 00:09:24.912 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:24.912 Controller IO queue size 128, less than required. 00:09:24.912 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:24.912 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:24.912 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:24.912 Initialization complete. Launching workers.
00:09:24.912 ========================================================
00:09:24.913 Latency(us)
00:09:24.913 Device Information : IOPS MiB/s Average min max
00:09:24.913 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001286.71 1000060.42 1004262.65
00:09:24.913 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002574.54 1000358.52 1005510.88
00:09:24.913 ========================================================
00:09:24.913 Total : 256.00 0.12 1001930.63 1000060.42 1005510.88
00:09:24.913
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 960061
00:09:25.487 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (960061) - No such process
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 960061
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:09:25.487 rmmod nvme_rdma
00:09:25.487 rmmod nvme_fabrics
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 958509 ']'
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 958509
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 958509 ']'
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 958509
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:25.487
20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 958509 00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 958509' 00:09:25.487 killing process with pid 958509 00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 958509 00:09:25.487 20:30:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 958509 00:09:25.747 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:25.747 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:25.747 00:09:25.747 real 0m22.158s 00:09:25.747 user 0m50.582s 00:09:25.747 sys 0m7.614s 00:09:25.747 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.747 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.747 ************************************ 00:09:25.747 END TEST nvmf_delete_subsystem 00:09:25.747 ************************************ 00:09:25.747 20:30:14 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:09:25.747 20:30:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:25.747 20:30:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.747 20:30:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.747 ************************************ 00:09:25.747 START TEST nvmf_host_management 00:09:25.747 ************************************ 00:09:25.747 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:09:26.007 * Looking for test storage... 
00:09:26.007 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.007 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:09:26.008 20:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@297 -- # local -ga x722 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:34.134 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma 
]] 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:34.134 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.134 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:34.135 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:34.135 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@414 -- # is_hw=yes 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.135 20:30:22 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:34.135 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.135 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:34.135 altname enp217s0f0np0 00:09:34.135 altname ens818f0np0 00:09:34.135 inet 192.168.100.8/24 scope global mlx_0_0 00:09:34.135 valid_lft forever preferred_lft forever 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:34.135 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.135 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:34.135 altname enp217s0f1np1 00:09:34.135 altname ens818f1np1 00:09:34.135 inet 192.168.100.9/24 scope global mlx_0_1 00:09:34.135 valid_lft forever preferred_lft forever 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- 
# '[' '' == iso ']' 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:09:34.135 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:34.136 20:30:22 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:34.136 192.168.100.9' 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:34.136 192.168.100.9' 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:34.136 192.168.100.9' 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=965382 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 965382 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 965382 ']' 00:09:34.136 20:30:22 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.136 20:30:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:34.136 [2024-07-26 20:30:22.596539] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:09:34.136 [2024-07-26 20:30:22.596594] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.136 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.136 [2024-07-26 20:30:22.681389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.395 [2024-07-26 20:30:22.722055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.395 [2024-07-26 20:30:22.722097] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.395 [2024-07-26 20:30:22.722106] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.395 [2024-07-26 20:30:22.722114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.395 [2024-07-26 20:30:22.722121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
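[Editor's note] The nvmfappstart helper traced above backgrounds nvmf_tgt (pid 965382 in this run) and then blocks in waitforlisten until the application answers RPCs on /var/tmp/spdk.sock; only the entry points and max_retries=100 appear in the trace. A minimal sketch of the polling this implies; the retry pacing and the use of rpc_get_methods as the liveness probe are assumptions, not taken from the trace:

    # Hedged sketch of waitforlisten as used twice in this log (once for
    # nvmf_tgt on /var/tmp/spdk.sock, once for bdevperf on /var/tmp/bdevperf.sock).
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # app died during startup
            # probe the RPC socket; any successful call means it is listening
            rpc_cmd -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1    # assumed pacing between probes
        done
        return 1
    }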
00:09:34.395 [2024-07-26 20:30:22.722228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.395 [2024-07-26 20:30:22.722317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.395 [2024-07-26 20:30:22.722410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.395 [2024-07-26 20:30:22.722411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:34.962 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.962 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:34.962 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:34.962 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.962 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:34.962 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.962 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:34.962 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.962 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:34.962 [2024-07-26 20:30:23.480542] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61b160/0x61f650) succeed. 00:09:34.962 [2024-07-26 20:30:23.489925] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61c7a0/0x660ce0) succeed. 
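[Editor's note] nvmf_tgt was started above with -m 0x1E, and 0x1E is binary 11110, i.e. cores 1 through 4, which is exactly the four reactor notices printed here. A quick, illustrative way to decode such a core mask:

    mask=0x1E
    for (( core = 0; core < 8; core++ )); do
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done
    # prints: core 1, core 2, core 3, core 4 selected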
00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:35.220 Malloc0 00:09:35.220 [2024-07-26 20:30:23.670851] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=965684 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 965684 /var/tmp/bdevperf.sock 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 965684 ']' 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:35.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
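[Editor's note] The create_subsystems step above (host_management.sh@22-30) writes an RPC batch into rpcs.txt and replays it through a bare rpc_cmd, which reads commands from stdin. The trace never echoes the file, so the batch below is an illustrative reconstruction only, inferred from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set earlier, the Malloc0 bdev and 192.168.100.8:4420 listener notices above, and the cnode0/host0 NQNs exercised later; the serial number is a placeholder:

    cat << EOF > rpcs.txt
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0000000000000001
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    EOF
    rpc_cmd < rpcs.txt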
00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:35.220 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:35.220 { 00:09:35.220 "params": { 00:09:35.220 "name": "Nvme$subsystem", 00:09:35.220 "trtype": "$TEST_TRANSPORT", 00:09:35.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:35.220 "adrfam": "ipv4", 00:09:35.220 "trsvcid": "$NVMF_PORT", 00:09:35.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:35.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:35.221 "hdgst": ${hdgst:-false}, 00:09:35.221 "ddgst": ${ddgst:-false} 00:09:35.221 }, 00:09:35.221 "method": "bdev_nvme_attach_controller" 00:09:35.221 } 00:09:35.221 EOF 00:09:35.221 )") 00:09:35.221 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:35.221 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:35.221 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:35.221 20:30:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:35.221 "params": { 00:09:35.221 "name": "Nvme0", 00:09:35.221 "trtype": "rdma", 00:09:35.221 "traddr": "192.168.100.8", 00:09:35.221 "adrfam": "ipv4", 00:09:35.221 "trsvcid": "4420", 00:09:35.221 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:35.221 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:35.221 "hdgst": false, 00:09:35.221 "ddgst": false 00:09:35.221 }, 00:09:35.221 "method": "bdev_nvme_attach_controller" 00:09:35.221 }' 00:09:35.221 [2024-07-26 20:30:23.772020] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:09:35.221 [2024-07-26 20:30:23.772075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid965684 ] 00:09:35.479 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.479 [2024-07-26 20:30:23.860417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.479 [2024-07-26 20:30:23.899496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.737 Running I/O for 10 seconds... 
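[Editor's note] bdevperf above receives its bdev configuration on --json /dev/fd/63, i.e. through a bash process substitution rather than a file on disk; the JSON printed by the trace is what gen_nvmf_target_json 0 rendered from the heredoc template. An equivalent standalone launch, with the workspace path as in this run:

    # <(...) expands to a /dev/fd/N path, which is exactly what the trace shows.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10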
00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1644 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1644 -ge 100 ']' 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
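[Editor's note] waitforio above is the same poll-until-ready idiom delete_subsystem.sh used earlier with kill -0: it repeatedly queries bdevperf's iostat over the RPC socket until Nvme0n1 has accumulated at least 100 reads (1644 on the first query here), so the remove_host/add_host steps that follow run against an actively loaded controller. A condensed sketch of the steps visible in the trace; the loop pacing is an assumption:

    waitforio() {
        local sock=$1 bdev=$2
        local ret=1 i read_io_count
        for (( i = 10; i != 0; i-- )); do
            read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                            | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25    # assumed; the trace does not show the pacing
        done
        return $ret
    }

    waitforio /var/tmp/bdevperf.sock Nvme0n1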
00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.305 20:30:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:37.309 [2024-07-26 20:30:25.671055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 
20:30:25.671296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182500 00:09:37.309 [2024-07-26 20:30:25.671488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182800 00:09:37.309 [2024-07-26 20:30:25.671515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182800 00:09:37.309 [2024-07-26 20:30:25.671543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182800 00:09:37.309 [2024-07-26 20:30:25.671572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182800 00:09:37.309 [2024-07-26 20:30:25.671603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:09:37.309 [2024-07-26 20:30:25.671638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:09:37.309 [2024-07-26 20:30:25.671668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:09:37.309 [2024-07-26 20:30:25.671696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:09:37.309 [2024-07-26 20:30:25.671728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 00:09:37.309 [2024-07-26 20:30:25.671756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.309 [2024-07-26 20:30:25.671772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:09:37.310 [2024-07-26 20:30:25.671786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.671801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182800 00:09:37.310 [2024-07-26 20:30:25.671815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.671830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182800 00:09:37.310 [2024-07-26 20:30:25.671844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.671859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182800 00:09:37.310 [2024-07-26 20:30:25.671873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.671889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182800 00:09:37.310 [2024-07-26 20:30:25.671902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.671919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182700 00:09:37.310 [2024-07-26 20:30:25.671932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.671948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182700 00:09:37.310 [2024-07-26 20:30:25.671961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.671976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182700 00:09:37.310 [2024-07-26 20:30:25.671990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:09:37.310 [2024-07-26 20:30:25.672018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:09:37.310 [2024-07-26 20:30:25.672049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:09:37.310 [2024-07-26 20:30:25.672077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:09:37.310 [2024-07-26 20:30:25.672107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:09:37.310 [2024-07-26 20:30:25.672135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182700 00:09:37.310 [2024-07-26 20:30:25.672163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182700 00:09:37.310 [2024-07-26 20:30:25.672190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182700 00:09:37.310 [2024-07-26 20:30:25.672219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182700 00:09:37.310 [2024-07-26 20:30:25.672249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182700 00:09:37.310 [2024-07-26 20:30:25.672277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e11e000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e13f000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daee000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db0f000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4be000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8e000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c85e000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e55f000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e53e000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:45 nsid:1 lba:96768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e51d000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4fc000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4db000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4ba000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e499000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.310 [2024-07-26 20:30:25.672760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e436000 len:0x10000 key:0x182400 00:09:37.310 [2024-07-26 20:30:25.672774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.311 [2024-07-26 20:30:25.672791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e415000 len:0x10000 key:0x182400 00:09:37.311 [2024-07-26 20:30:25.672806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.311 [2024-07-26 20:30:25.672821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e3f4000 len:0x10000 key:0x182400 00:09:37.311 [2024-07-26 20:30:25.672834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.311 [2024-07-26 20:30:25.672850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e3d3000 len:0x10000 key:0x182400 00:09:37.311 [2024-07-26 20:30:25.672863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.311 [2024-07-26 20:30:25.672878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97920 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20000e3b2000 len:0x10000 key:0x182400 00:09:37.311 [2024-07-26 20:30:25.672892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.311 [2024-07-26 20:30:25.672906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e391000 len:0x10000 key:0x182400 00:09:37.311 [2024-07-26 20:30:25.672921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.311 [2024-07-26 20:30:25.672936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e370000 len:0x10000 key:0x182400 00:09:37.311 [2024-07-26 20:30:25.672950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:86e23000 sqhd:52b0 p:0 m:0 dnr:0 00:09:37.311 [2024-07-26 20:30:25.674857] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:09:37.311 [2024-07-26 20:30:25.675860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:37.311 task offset: 98304 on job bdev=Nvme0n1 fails 00:09:37.311 00:09:37.311 Latency(us) 00:09:37.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.311 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:37.311 Job: Nvme0n1 ended in about 1.60 seconds with error 00:09:37.311 Verification LBA range: start 0x0 length 0x400 00:09:37.311 Nvme0n1 : 1.60 1105.89 69.12 40.01 0.00 55378.04 2123.37 1026765.62 00:09:37.311 =================================================================================================================== 00:09:37.311 Total : 1105.89 69.12 40.01 0.00 55378.04 2123.37 1026765.62 00:09:37.311 [2024-07-26 20:30:25.677417] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.311 20:30:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 965684 00:09:37.311 20:30:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:37.311 20:30:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:37.311 20:30:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:37.311 20:30:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:37.311 20:30:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:37.311 20:30:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:37.311 20:30:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:37.311 { 00:09:37.311 "params": { 00:09:37.311 "name": "Nvme$subsystem", 00:09:37.311 "trtype": "$TEST_TRANSPORT", 00:09:37.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.311 "adrfam": "ipv4", 00:09:37.311 "trsvcid": "$NVMF_PORT", 
00:09:37.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.311 "hdgst": ${hdgst:-false}, 00:09:37.311 "ddgst": ${ddgst:-false} 00:09:37.311 }, 00:09:37.311 "method": "bdev_nvme_attach_controller" 00:09:37.311 } 00:09:37.311 EOF 00:09:37.311 )") 00:09:37.311 20:30:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:37.311 20:30:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:37.311 20:30:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:37.311 20:30:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:37.311 "params": { 00:09:37.311 "name": "Nvme0", 00:09:37.311 "trtype": "rdma", 00:09:37.311 "traddr": "192.168.100.8", 00:09:37.311 "adrfam": "ipv4", 00:09:37.311 "trsvcid": "4420", 00:09:37.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:37.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:37.311 "hdgst": false, 00:09:37.311 "ddgst": false 00:09:37.311 }, 00:09:37.311 "method": "bdev_nvme_attach_controller" 00:09:37.311 }' 00:09:37.311 [2024-07-26 20:30:25.732484] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:09:37.311 [2024-07-26 20:30:25.732532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid965971 ] 00:09:37.311 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.311 [2024-07-26 20:30:25.817426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.571 [2024-07-26 20:30:25.856498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.571 Running I/O for 1 seconds... 
00:09:38.506 00:09:38.506 Latency(us) 00:09:38.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.506 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:38.506 Verification LBA range: start 0x0 length 0x400 00:09:38.506 Nvme0n1 : 1.01 3169.69 198.11 0.00 0.00 19787.33 606.21 32505.86 00:09:38.506 =================================================================================================================== 00:09:38.506 Total : 3169.69 198.11 0.00 0.00 19787.33 606.21 32505.86 00:09:38.764 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 965684 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:09:38.764 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:38.764 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:38.764 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:38.764 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:38.764 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:38.764 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.764 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:38.764 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:38.764 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:38.765 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:38.765 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.765 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:38.765 rmmod nvme_rdma 00:09:38.765 rmmod nvme_fabrics 00:09:38.765 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.765 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:38.765 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:38.765 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 965382 ']' 00:09:38.765 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 965382 00:09:38.765 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 965382 ']' 00:09:38.765 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 965382 00:09:38.765 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:38.765 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.765 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 965382 00:09:39.023 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:39.023 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:39.023 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 965382' 00:09:39.023 killing process with pid 965382 00:09:39.023 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 965382 00:09:39.023 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 965382 00:09:39.282 [2024-07-26 20:30:27.596096] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:39.282 00:09:39.282 real 0m13.401s 00:09:39.282 user 0m25.047s 00:09:39.282 sys 0m7.331s 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.282 ************************************ 00:09:39.282 END TEST nvmf_host_management 00:09:39.282 ************************************ 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.282 ************************************ 00:09:39.282 START TEST nvmf_lvol 00:09:39.282 ************************************ 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:09:39.282 * Looking for test storage... 
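The teardown traced a few lines up is autotest_common.sh's killprocess: before signalling anything it resolves the PID's command name with ps --no-headers -o comm= (reactor_1 here, i.e. the target's reactor thread) and bails out if that name turns out to be sudo, then kills and reaps the target. A rough sketch of that logic as reconstructed from the xtrace (the upstream helper may escalate signals or handle more platforms than shown):

killprocess() {
    local pid=$1 process_name
    # Never signal the sudo wrapper itself by mistake
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        echo "refusing to kill sudo (pid $pid)"   # message text assumed
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # Reap the child so its exit status is collected before the next test
    wait "$pid" || true
}

With host_management finished, the harness moves straight on to the nvmf_lvol sub-test below.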
00:09:39.282 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.282 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.541 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:09:39.542 20:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.662 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
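The run of e810+=/x722+=/mlx+= assignments above is nvmf/common.sh building its table of supported NIC vendor:device IDs; because this is an RDMA run, only the Mellanox set is kept (pci_devs=("${mlx[@]}") further down), and each surviving PCI function is then mapped to its Linux netdev through sysfs, which is where the "Found net devices under 0000:d9:00.x" lines below come from. A simplified sketch of that mapping (the real helper populates pci_bus_cache from lspci beforehand):

# 0x15b3:0x1015 is the vendor:device pair both ports match in this run
pci_devs=( ${pci_bus_cache["0x15b3:0x1015"]} )
for pci in "${pci_devs[@]}"; do
    # sysfs exposes the netdev name(s) owned by each PCI function
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
    pci_net_devs=( "${pci_net_devs[@]##*/}" )   # keep just the device names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=( "${pci_net_devs[@]}" )
done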
00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:47.663 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:47.663 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:47.663 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:47.663 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:47.663 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:47.663 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:47.663 altname 
enp217s0f0np0 00:09:47.663 altname ens818f0np0 00:09:47.663 inet 192.168.100.8/24 scope global mlx_0_0 00:09:47.663 valid_lft forever preferred_lft forever 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:47.663 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:47.664 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:47.664 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:47.664 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:47.664 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:47.664 altname enp217s0f1np1 00:09:47.664 altname ens818f1np1 00:09:47.664 inet 192.168.100.9/24 scope global mlx_0_1 00:09:47.664 valid_lft forever preferred_lft forever 00:09:47.664 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:09:47.664 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.664 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:47.664 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:47.664 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:47.664 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:47.664 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:47.664 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:47.664 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:47.664 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:47.923 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:47.923 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:47.923 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.923 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:47.923 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:47.923 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:47.924 192.168.100.9' 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:47.924 192.168.100.9' 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:47.924 192.168.100.9' 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.924 20:30:36 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=970414 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 970414 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 970414 ']' 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.924 20:30:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:47.924 [2024-07-26 20:30:36.332426] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:09:47.924 [2024-07-26 20:30:36.332473] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.924 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.924 [2024-07-26 20:30:36.417078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.924 [2024-07-26 20:30:36.458086] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.924 [2024-07-26 20:30:36.458125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.924 [2024-07-26 20:30:36.458139] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.924 [2024-07-26 20:30:36.458150] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.924 [2024-07-26 20:30:36.458164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
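The address discovery traced above reduces to a single shell idiom: walk the RDMA-backed interfaces, take the first IPv4 address of each, and strip the CIDR suffix. A minimal sketch of that helper, reconstructed from the nvmf/common.sh@112-113 trace lines (the interface names mlx_0_0/mlx_0_1 and the 192.168.100.x addresses are specific to this rig):

    # mirrors the get_ip_address helper exercised in the trace above
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 on this rig
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 on this rig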
00:09:47.924 [2024-07-26 20:30:36.458225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.924 [2024-07-26 20:30:36.458321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.924 [2024-07-26 20:30:36.458325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.861 20:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.861 20:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:48.861 20:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:48.861 20:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:48.861 20:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:48.861 20:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.861 20:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:48.861 [2024-07-26 20:30:37.379405] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cfd220/0x1d01710) succeed. 00:09:48.861 [2024-07-26 20:30:37.388343] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cfe7c0/0x1d42da0) succeed. 00:09:49.120 20:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.379 20:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:49.379 20:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:49.379 20:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:49.379 20:30:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:49.637 20:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:49.896 20:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a4bf7c1f-8485-4310-83cc-47de61c6a6eb 00:09:49.896 20:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a4bf7c1f-8485-4310-83cc-47de61c6a6eb lvol 20 00:09:49.896 20:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cc83fff6-551a-46d9-bb6a-156c65d2fd4d 00:09:49.896 20:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:50.154 20:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cc83fff6-551a-46d9-bb6a-156c65d2fd4d 00:09:50.412 20:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:50.412 [2024-07-26 20:30:38.908980] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:50.412 20:30:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:50.671 20:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=970978 00:09:50.671 20:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:50.671 20:30:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:50.671 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.606 20:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cc83fff6-551a-46d9-bb6a-156c65d2fd4d MY_SNAPSHOT 00:09:51.864 20:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=eeefa59b-24e0-43f5-8fd1-69a1e3c706c8 00:09:51.864 20:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cc83fff6-551a-46d9-bb6a-156c65d2fd4d 30 00:09:52.123 20:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone eeefa59b-24e0-43f5-8fd1-69a1e3c706c8 MY_CLONE 00:09:52.381 20:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8701053d-883d-4acd-b346-cb9da2de6407 00:09:52.381 20:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8701053d-883d-4acd-b346-cb9da2de6407 00:09:52.381 20:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 970978 00:10:02.355 Initializing NVMe Controllers 00:10:02.355 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:10:02.355 Controller IO queue size 128, less than required. 00:10:02.355 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:02.355 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:02.355 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:02.355 Initialization complete. Launching workers. 
00:10:02.355 ======================================================== 00:10:02.355 Latency(us) 00:10:02.355 Device Information : IOPS MiB/s Average min max 00:10:02.355 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16795.30 65.61 7623.32 2233.03 36660.11 00:10:02.355 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16792.00 65.59 7624.30 3419.38 39585.50 00:10:02.355 ======================================================== 00:10:02.355 Total : 33587.30 131.20 7623.81 2233.03 39585.50 00:10:02.355 00:10:02.355 20:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:02.355 20:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cc83fff6-551a-46d9-bb6a-156c65d2fd4d 00:10:02.355 20:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4bf7c1f-8485-4310-83cc-47de61c6a6eb 00:10:02.613 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:02.613 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:02.613 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:02.613 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:02.613 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:02.613 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:02.613 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:02.613 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:02.613 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:02.613 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:02.613 rmmod nvme_rdma 00:10:02.613 rmmod nvme_fabrics 00:10:02.613 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 970414 ']' 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 970414 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 970414 ']' 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 970414 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 970414 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 970414' 00:10:02.614 killing process with pid 970414 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 970414 00:10:02.614 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 970414 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:03.182 00:10:03.182 real 0m23.728s 00:10:03.182 user 1m11.545s 00:10:03.182 sys 0m7.762s 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:03.182 ************************************ 00:10:03.182 END TEST nvmf_lvol 00:10:03.182 ************************************ 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.182 ************************************ 00:10:03.182 START TEST nvmf_lvs_grow 00:10:03.182 ************************************ 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:03.182 * Looking for test storage... 
00:10:03.182 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.182 20:30:51 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 
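Before nvmftestinit runs, the sourced nvmf/common.sh establishes the ports and host identity that every later nvme connect in the suite reuses. A condensed sketch of that setup, taken from the common.sh@9-19 trace lines (the uuid-stripping step is one plausible derivation; the script may compute NVME_HOSTID differently, and the generated UUID is host-specific):

    # condensed from the nvmf/common.sh defaults traced above
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_IP_PREFIX=192.168.100
    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumption: hostid is the uuid part of the hostnqn
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")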
00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:10:03.182 20:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.197 20:30:59 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:13.197 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:13.197 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:13.197 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:13.197 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:13.198 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:13.198 20:30:59 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:13.198 20:30:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:13.198 
20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:13.198 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:13.198 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:13.198 altname enp217s0f0np0 00:10:13.198 altname ens818f0np0 00:10:13.198 inet 192.168.100.8/24 scope global mlx_0_0 00:10:13.198 valid_lft forever preferred_lft forever 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:13.198 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:13.198 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:13.198 altname enp217s0f1np1 00:10:13.198 altname ens818f1np1 00:10:13.198 inet 192.168.100.9/24 scope global mlx_0_1 00:10:13.198 valid_lft forever preferred_lft forever 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:13.198 20:31:00 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:13.198 192.168.100.9' 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:13.198 192.168.100.9' 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:13.198 192.168.100.9' 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:13.198 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@463 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=977188 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 977188 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 977188 ']' 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:13.199 [2024-07-26 20:31:00.229349] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:10:13.199 [2024-07-26 20:31:00.229399] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.199 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.199 [2024-07-26 20:31:00.314363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.199 [2024-07-26 20:31:00.352959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.199 [2024-07-26 20:31:00.353001] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.199 [2024-07-26 20:31:00.353012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.199 [2024-07-26 20:31:00.353021] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.199 [2024-07-26 20:31:00.353044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
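The start-up pattern each test repeats is fully visible here: launch nvmf_tgt in the background, record its pid, block in the waitforlisten helper until the RPC socket at /var/tmp/spdk.sock answers, install the cleanup trap, then shape the transport over RPC. A minimal sketch with the workspace paths abbreviated (capturing the pid via $! is an assumption; the harness may obtain it differently):

    # start-and-wait sequence as traced above, paths shortened
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!                # assumed; the trace only shows nvmfpid=977188
    waitforlisten "$nvmfpid"  # autotest helper: polls the RPC socket until it accepts commands
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192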
00:10:13.199 [2024-07-26 20:31:00.353068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:13.199 [2024-07-26 20:31:00.659521] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24b3c80/0x24b8170) succeed. 00:10:13.199 [2024-07-26 20:31:00.668661] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24b5180/0x24f9800) succeed. 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:13.199 ************************************ 00:10:13.199 START TEST lvs_grow_clean 00:10:13.199 ************************************ 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:13.199 20:31:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:13.199 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6b8cf8d1-45e3-4745-983d-8273dec0825c 00:10:13.199 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b8cf8d1-45e3-4745-983d-8273dec0825c 00:10:13.199 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:13.199 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:13.199 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:13.199 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6b8cf8d1-45e3-4745-983d-8273dec0825c lvol 150 00:10:13.199 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2258083f-c4f7-4a8e-afd8-1192c9e679b4 00:10:13.199 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:13.199 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:13.199 [2024-07-26 20:31:01.641858] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:13.199 [2024-07-26 20:31:01.641915] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:13.199 true 00:10:13.199 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b8cf8d1-45e3-4745-983d-8273dec0825c 00:10:13.199 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:13.458 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:13.459 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:13.459 20:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2258083f-c4f7-4a8e-afd8-1192c9e679b4 00:10:13.718 20:31:02 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:13.977 [2024-07-26 20:31:02.288028] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:13.978 20:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:13.978 20:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=977617 00:10:13.978 20:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:13.978 20:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:13.978 20:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 977617 /var/tmp/bdevperf.sock 00:10:13.978 20:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 977617 ']' 00:10:13.978 20:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:13.978 20:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.978 20:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:13.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:13.978 20:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.978 20:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:13.978 [2024-07-26 20:31:02.513765] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
00:10:13.978 [2024-07-26 20:31:02.513816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid977617 ] 00:10:14.237 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.237 [2024-07-26 20:31:02.599417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.237 [2024-07-26 20:31:02.637330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.804 20:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.804 20:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:14.804 20:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:15.063 Nvme0n1 00:10:15.063 20:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:15.321 [ 00:10:15.321 { 00:10:15.321 "name": "Nvme0n1", 00:10:15.321 "aliases": [ 00:10:15.321 "2258083f-c4f7-4a8e-afd8-1192c9e679b4" 00:10:15.321 ], 00:10:15.321 "product_name": "NVMe disk", 00:10:15.322 "block_size": 4096, 00:10:15.322 "num_blocks": 38912, 00:10:15.322 "uuid": "2258083f-c4f7-4a8e-afd8-1192c9e679b4", 00:10:15.322 "assigned_rate_limits": { 00:10:15.322 "rw_ios_per_sec": 0, 00:10:15.322 "rw_mbytes_per_sec": 0, 00:10:15.322 "r_mbytes_per_sec": 0, 00:10:15.322 "w_mbytes_per_sec": 0 00:10:15.322 }, 00:10:15.322 "claimed": false, 00:10:15.322 "zoned": false, 00:10:15.322 "supported_io_types": { 00:10:15.322 "read": true, 00:10:15.322 "write": true, 00:10:15.322 "unmap": true, 00:10:15.322 "flush": true, 00:10:15.322 "reset": true, 00:10:15.322 "nvme_admin": true, 00:10:15.322 "nvme_io": true, 00:10:15.322 "nvme_io_md": false, 00:10:15.322 "write_zeroes": true, 00:10:15.322 "zcopy": false, 00:10:15.322 "get_zone_info": false, 00:10:15.322 "zone_management": false, 00:10:15.322 "zone_append": false, 00:10:15.322 "compare": true, 00:10:15.322 "compare_and_write": true, 00:10:15.322 "abort": true, 00:10:15.322 "seek_hole": false, 00:10:15.322 "seek_data": false, 00:10:15.322 "copy": true, 00:10:15.322 "nvme_iov_md": false 00:10:15.322 }, 00:10:15.322 "memory_domains": [ 00:10:15.322 { 00:10:15.322 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:15.322 "dma_device_type": 0 00:10:15.322 } 00:10:15.322 ], 00:10:15.322 "driver_specific": { 00:10:15.322 "nvme": [ 00:10:15.322 { 00:10:15.322 "trid": { 00:10:15.322 "trtype": "RDMA", 00:10:15.322 "adrfam": "IPv4", 00:10:15.322 "traddr": "192.168.100.8", 00:10:15.322 "trsvcid": "4420", 00:10:15.322 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:15.322 }, 00:10:15.322 "ctrlr_data": { 00:10:15.322 "cntlid": 1, 00:10:15.322 "vendor_id": "0x8086", 00:10:15.322 "model_number": "SPDK bdev Controller", 00:10:15.322 "serial_number": "SPDK0", 00:10:15.322 "firmware_revision": "24.09", 00:10:15.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:15.322 "oacs": { 00:10:15.322 "security": 0, 00:10:15.322 "format": 0, 00:10:15.322 "firmware": 0, 00:10:15.322 "ns_manage": 0 00:10:15.322 }, 
00:10:15.322 "multi_ctrlr": true, 00:10:15.322 "ana_reporting": false 00:10:15.322 }, 00:10:15.322 "vs": { 00:10:15.322 "nvme_version": "1.3" 00:10:15.322 }, 00:10:15.322 "ns_data": { 00:10:15.322 "id": 1, 00:10:15.322 "can_share": true 00:10:15.322 } 00:10:15.322 } 00:10:15.322 ], 00:10:15.322 "mp_policy": "active_passive" 00:10:15.322 } 00:10:15.322 } 00:10:15.322 ] 00:10:15.322 20:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=977885 00:10:15.322 20:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:15.322 20:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:15.322 Running I/O for 10 seconds... 00:10:16.259 Latency(us) 00:10:16.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.259 Nvme0n1 : 1.00 35171.00 137.39 0.00 0.00 0.00 0.00 0.00 00:10:16.259 =================================================================================================================== 00:10:16.259 Total : 35171.00 137.39 0.00 0.00 0.00 0.00 0.00 00:10:16.259 00:10:17.197 20:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6b8cf8d1-45e3-4745-983d-8273dec0825c 00:10:17.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.456 Nvme0n1 : 2.00 35568.50 138.94 0.00 0.00 0.00 0.00 0.00 00:10:17.456 =================================================================================================================== 00:10:17.456 Total : 35568.50 138.94 0.00 0.00 0.00 0.00 0.00 00:10:17.456 00:10:17.456 true 00:10:17.456 20:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b8cf8d1-45e3-4745-983d-8273dec0825c 00:10:17.456 20:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:17.715 20:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:17.715 20:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:17.715 20:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 977885 00:10:18.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.283 Nvme0n1 : 3.00 35659.00 139.29 0.00 0.00 0.00 0.00 0.00 00:10:18.283 =================================================================================================================== 00:10:18.283 Total : 35659.00 139.29 0.00 0.00 0.00 0.00 0.00 00:10:18.283 00:10:19.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.660 Nvme0n1 : 4.00 35767.75 139.72 0.00 0.00 0.00 0.00 0.00 00:10:19.660 =================================================================================================================== 00:10:19.660 Total : 35767.75 139.72 0.00 0.00 0.00 0.00 0.00 00:10:19.660 00:10:20.597 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:10:20.597 Nvme0n1 : 5.00 35845.60 140.02 0.00 0.00 0.00 0.00 0.00 00:10:20.597 =================================================================================================================== 00:10:20.597 Total : 35845.60 140.02 0.00 0.00 0.00 0.00 0.00 00:10:20.597 00:10:21.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.533 Nvme0n1 : 6.00 35878.00 140.15 0.00 0.00 0.00 0.00 0.00 00:10:21.533 =================================================================================================================== 00:10:21.533 Total : 35878.00 140.15 0.00 0.00 0.00 0.00 0.00 00:10:21.533 00:10:22.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.470 Nvme0n1 : 7.00 35904.43 140.25 0.00 0.00 0.00 0.00 0.00 00:10:22.470 =================================================================================================================== 00:10:22.470 Total : 35904.43 140.25 0.00 0.00 0.00 0.00 0.00 00:10:22.470 00:10:23.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.406 Nvme0n1 : 8.00 35947.50 140.42 0.00 0.00 0.00 0.00 0.00 00:10:23.406 =================================================================================================================== 00:10:23.406 Total : 35947.50 140.42 0.00 0.00 0.00 0.00 0.00 00:10:23.406 00:10:24.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.342 Nvme0n1 : 9.00 35971.00 140.51 0.00 0.00 0.00 0.00 0.00 00:10:24.342 =================================================================================================================== 00:10:24.342 Total : 35971.00 140.51 0.00 0.00 0.00 0.00 0.00 00:10:24.342 00:10:25.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.278 Nvme0n1 : 10.00 36000.70 140.63 0.00 0.00 0.00 0.00 0.00 00:10:25.278 =================================================================================================================== 00:10:25.278 Total : 36000.70 140.63 0.00 0.00 0.00 0.00 0.00 00:10:25.278 00:10:25.537 00:10:25.537 Latency(us) 00:10:25.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.537 Nvme0n1 : 10.00 36000.87 140.63 0.00 0.00 3552.54 2372.40 14050.92 00:10:25.537 =================================================================================================================== 00:10:25.537 Total : 36000.87 140.63 0.00 0.00 3552.54 2372.40 14050.92 00:10:25.537 0 00:10:25.537 20:31:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 977617 00:10:25.537 20:31:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 977617 ']' 00:10:25.537 20:31:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 977617 00:10:25.537 20:31:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:10:25.537 20:31:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.537 20:31:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 977617 00:10:25.537 20:31:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- 
# process_name=reactor_1 00:10:25.537 20:31:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:25.537 20:31:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 977617' 00:10:25.537 killing process with pid 977617 00:10:25.537 20:31:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 977617 00:10:25.537 Received shutdown signal, test time was about 10.000000 seconds 00:10:25.537 00:10:25.537 Latency(us) 00:10:25.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.537 =================================================================================================================== 00:10:25.537 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:25.537 20:31:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 977617 00:10:25.537 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:25.796 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:26.055 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b8cf8d1-45e3-4745-983d-8273dec0825c 00:10:26.055 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:26.055 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:26.055 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:26.055 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:26.314 [2024-07-26 20:31:14.751405] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:26.314 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b8cf8d1-45e3-4745-983d-8273dec0825c 00:10:26.314 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:26.314 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b8cf8d1-45e3-4745-983d-8273dec0825c 00:10:26.314 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:26.314 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:26.314 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:26.314 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:26.314 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:26.314 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:26.314 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:26.314 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:26.314 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b8cf8d1-45e3-4745-983d-8273dec0825c 00:10:26.573 request: 00:10:26.573 { 00:10:26.573 "uuid": "6b8cf8d1-45e3-4745-983d-8273dec0825c", 00:10:26.573 "method": "bdev_lvol_get_lvstores", 00:10:26.573 "req_id": 1 00:10:26.573 } 00:10:26.573 Got JSON-RPC error response 00:10:26.573 response: 00:10:26.573 { 00:10:26.573 "code": -19, 00:10:26.573 "message": "No such device" 00:10:26.573 } 00:10:26.573 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:26.573 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:26.573 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:26.573 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:26.573 20:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:26.833 aio_bdev 00:10:26.833 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2258083f-c4f7-4a8e-afd8-1192c9e679b4 00:10:26.833 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=2258083f-c4f7-4a8e-afd8-1192c9e679b4 00:10:26.833 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.833 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:10:26.833 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.833 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.833 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:26.833 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 
2258083f-c4f7-4a8e-afd8-1192c9e679b4 -t 2000 00:10:27.092 [ 00:10:27.093 { 00:10:27.093 "name": "2258083f-c4f7-4a8e-afd8-1192c9e679b4", 00:10:27.093 "aliases": [ 00:10:27.093 "lvs/lvol" 00:10:27.093 ], 00:10:27.093 "product_name": "Logical Volume", 00:10:27.093 "block_size": 4096, 00:10:27.093 "num_blocks": 38912, 00:10:27.093 "uuid": "2258083f-c4f7-4a8e-afd8-1192c9e679b4", 00:10:27.093 "assigned_rate_limits": { 00:10:27.093 "rw_ios_per_sec": 0, 00:10:27.093 "rw_mbytes_per_sec": 0, 00:10:27.093 "r_mbytes_per_sec": 0, 00:10:27.093 "w_mbytes_per_sec": 0 00:10:27.093 }, 00:10:27.093 "claimed": false, 00:10:27.093 "zoned": false, 00:10:27.093 "supported_io_types": { 00:10:27.093 "read": true, 00:10:27.093 "write": true, 00:10:27.093 "unmap": true, 00:10:27.093 "flush": false, 00:10:27.093 "reset": true, 00:10:27.093 "nvme_admin": false, 00:10:27.093 "nvme_io": false, 00:10:27.093 "nvme_io_md": false, 00:10:27.093 "write_zeroes": true, 00:10:27.093 "zcopy": false, 00:10:27.093 "get_zone_info": false, 00:10:27.093 "zone_management": false, 00:10:27.093 "zone_append": false, 00:10:27.093 "compare": false, 00:10:27.093 "compare_and_write": false, 00:10:27.093 "abort": false, 00:10:27.093 "seek_hole": true, 00:10:27.093 "seek_data": true, 00:10:27.093 "copy": false, 00:10:27.093 "nvme_iov_md": false 00:10:27.093 }, 00:10:27.093 "driver_specific": { 00:10:27.093 "lvol": { 00:10:27.093 "lvol_store_uuid": "6b8cf8d1-45e3-4745-983d-8273dec0825c", 00:10:27.093 "base_bdev": "aio_bdev", 00:10:27.093 "thin_provision": false, 00:10:27.093 "num_allocated_clusters": 38, 00:10:27.093 "snapshot": false, 00:10:27.093 "clone": false, 00:10:27.093 "esnap_clone": false 00:10:27.093 } 00:10:27.093 } 00:10:27.093 } 00:10:27.093 ] 00:10:27.093 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:27.093 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b8cf8d1-45e3-4745-983d-8273dec0825c 00:10:27.093 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:27.093 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:27.093 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b8cf8d1-45e3-4745-983d-8273dec0825c 00:10:27.093 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:27.352 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:27.352 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2258083f-c4f7-4a8e-afd8-1192c9e679b4 00:10:27.611 20:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6b8cf8d1-45e3-4745-983d-8273dec0825c 00:10:27.611 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:10:27.870 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:27.870 00:10:27.870 real 0m15.587s 00:10:27.870 user 0m15.427s 00:10:27.870 sys 0m1.250s 00:10:27.870 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.870 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:27.870 ************************************ 00:10:27.870 END TEST lvs_grow_clean 00:10:27.870 ************************************ 00:10:27.870 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:27.870 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:27.870 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.870 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:28.129 ************************************ 00:10:28.129 START TEST lvs_grow_dirty 00:10:28.129 ************************************ 00:10:28.129 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:28.129 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:28.129 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:28.129 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:28.129 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:28.129 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:28.129 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:28.129 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:28.129 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:28.129 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:28.129 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:28.129 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:28.389 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:28.389 20:31:16 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:28.389 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:28.680 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:28.680 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:28.680 20:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7798009c-acdb-4772-a1ad-293c5b7595fa lvol 150 00:10:28.680 20:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0e98392b-0d42-45d0-baeb-b61e38f9513c 00:10:28.680 20:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:28.680 20:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:28.939 [2024-07-26 20:31:17.300364] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:28.939 [2024-07-26 20:31:17.300417] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:28.939 true 00:10:28.939 20:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:28.939 20:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:29.198 20:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:29.199 20:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:29.199 20:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0e98392b-0d42-45d0-baeb-b61e38f9513c 00:10:29.458 20:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:29.458 [2024-07-26 20:31:17.970565] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:29.458 20:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:29.717 20:31:18 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:29.717 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=980343 00:10:29.717 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:29.717 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 980343 /var/tmp/bdevperf.sock 00:10:29.717 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 980343 ']' 00:10:29.717 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:29.717 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.717 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:29.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:29.717 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.717 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:29.717 [2024-07-26 20:31:18.161859] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
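Both variants drive I/O the same way: bdevperf is launched with -z so it idles until instructed over /var/tmp/bdevperf.sock, the exported namespace is attached as a local NVMe bdev over RDMA, and perform_tests starts the 10-second randwrite run whose per-second IOPS lines follow. A sketch under the same flags, with SPDK_ROOT standing in for the workspace checkout:

  SPDK_ROOT=/path/to/spdk    # assumption: point at the local SPDK tree
  $SPDK_ROOT/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # returns once the 10 s randwrite run completes
  $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests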
00:10:29.717 [2024-07-26 20:31:18.161908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980343 ] 00:10:29.717 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.717 [2024-07-26 20:31:18.247987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.977 [2024-07-26 20:31:18.285954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.977 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.977 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:29.977 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:30.236 Nvme0n1 00:10:30.236 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:30.496 [ 00:10:30.496 { 00:10:30.496 "name": "Nvme0n1", 00:10:30.496 "aliases": [ 00:10:30.496 "0e98392b-0d42-45d0-baeb-b61e38f9513c" 00:10:30.496 ], 00:10:30.496 "product_name": "NVMe disk", 00:10:30.496 "block_size": 4096, 00:10:30.496 "num_blocks": 38912, 00:10:30.496 "uuid": "0e98392b-0d42-45d0-baeb-b61e38f9513c", 00:10:30.496 "assigned_rate_limits": { 00:10:30.496 "rw_ios_per_sec": 0, 00:10:30.496 "rw_mbytes_per_sec": 0, 00:10:30.496 "r_mbytes_per_sec": 0, 00:10:30.496 "w_mbytes_per_sec": 0 00:10:30.496 }, 00:10:30.496 "claimed": false, 00:10:30.496 "zoned": false, 00:10:30.496 "supported_io_types": { 00:10:30.496 "read": true, 00:10:30.496 "write": true, 00:10:30.496 "unmap": true, 00:10:30.496 "flush": true, 00:10:30.496 "reset": true, 00:10:30.496 "nvme_admin": true, 00:10:30.496 "nvme_io": true, 00:10:30.496 "nvme_io_md": false, 00:10:30.496 "write_zeroes": true, 00:10:30.496 "zcopy": false, 00:10:30.496 "get_zone_info": false, 00:10:30.496 "zone_management": false, 00:10:30.496 "zone_append": false, 00:10:30.496 "compare": true, 00:10:30.496 "compare_and_write": true, 00:10:30.496 "abort": true, 00:10:30.496 "seek_hole": false, 00:10:30.496 "seek_data": false, 00:10:30.496 "copy": true, 00:10:30.496 "nvme_iov_md": false 00:10:30.496 }, 00:10:30.496 "memory_domains": [ 00:10:30.496 { 00:10:30.496 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:30.496 "dma_device_type": 0 00:10:30.496 } 00:10:30.496 ], 00:10:30.496 "driver_specific": { 00:10:30.496 "nvme": [ 00:10:30.496 { 00:10:30.496 "trid": { 00:10:30.496 "trtype": "RDMA", 00:10:30.496 "adrfam": "IPv4", 00:10:30.496 "traddr": "192.168.100.8", 00:10:30.496 "trsvcid": "4420", 00:10:30.496 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:30.496 }, 00:10:30.496 "ctrlr_data": { 00:10:30.496 "cntlid": 1, 00:10:30.496 "vendor_id": "0x8086", 00:10:30.496 "model_number": "SPDK bdev Controller", 00:10:30.496 "serial_number": "SPDK0", 00:10:30.496 "firmware_revision": "24.09", 00:10:30.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:30.496 "oacs": { 00:10:30.496 "security": 0, 00:10:30.496 "format": 0, 00:10:30.496 "firmware": 0, 00:10:30.496 "ns_manage": 0 00:10:30.496 }, 
00:10:30.496 "multi_ctrlr": true, 00:10:30.496 "ana_reporting": false 00:10:30.496 }, 00:10:30.496 "vs": { 00:10:30.496 "nvme_version": "1.3" 00:10:30.496 }, 00:10:30.496 "ns_data": { 00:10:30.496 "id": 1, 00:10:30.496 "can_share": true 00:10:30.496 } 00:10:30.496 } 00:10:30.496 ], 00:10:30.496 "mp_policy": "active_passive" 00:10:30.496 } 00:10:30.496 } 00:10:30.496 ] 00:10:30.496 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=980525 00:10:30.496 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:30.496 20:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:30.496 Running I/O for 10 seconds... 00:10:31.446 Latency(us) 00:10:31.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.446 Nvme0n1 : 1.00 35300.00 137.89 0.00 0.00 0.00 0.00 0.00 00:10:31.446 =================================================================================================================== 00:10:31.446 Total : 35300.00 137.89 0.00 0.00 0.00 0.00 0.00 00:10:31.446 00:10:32.382 20:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:32.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.382 Nvme0n1 : 2.00 35234.00 137.63 0.00 0.00 0.00 0.00 0.00 00:10:32.382 =================================================================================================================== 00:10:32.382 Total : 35234.00 137.63 0.00 0.00 0.00 0.00 0.00 00:10:32.382 00:10:32.641 true 00:10:32.641 20:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:32.641 20:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:32.641 20:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:32.641 20:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:32.641 20:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 980525 00:10:33.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.579 Nvme0n1 : 3.00 35466.67 138.54 0.00 0.00 0.00 0.00 0.00 00:10:33.579 =================================================================================================================== 00:10:33.579 Total : 35466.67 138.54 0.00 0.00 0.00 0.00 0.00 00:10:33.579 00:10:34.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.516 Nvme0n1 : 4.00 35625.25 139.16 0.00 0.00 0.00 0.00 0.00 00:10:34.516 =================================================================================================================== 00:10:34.516 Total : 35625.25 139.16 0.00 0.00 0.00 0.00 0.00 00:10:34.516 00:10:35.452 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:10:35.452 Nvme0n1 : 5.00 35732.20 139.58 0.00 0.00 0.00 0.00 0.00 00:10:35.452 =================================================================================================================== 00:10:35.452 Total : 35732.20 139.58 0.00 0.00 0.00 0.00 0.00 00:10:35.452 00:10:36.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.391 Nvme0n1 : 6.00 35803.33 139.86 0.00 0.00 0.00 0.00 0.00 00:10:36.391 =================================================================================================================== 00:10:36.391 Total : 35803.33 139.86 0.00 0.00 0.00 0.00 0.00 00:10:36.391 00:10:37.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.770 Nvme0n1 : 7.00 35868.00 140.11 0.00 0.00 0.00 0.00 0.00 00:10:37.770 =================================================================================================================== 00:10:37.770 Total : 35868.00 140.11 0.00 0.00 0.00 0.00 0.00 00:10:37.770 00:10:38.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.707 Nvme0n1 : 8.00 35916.50 140.30 0.00 0.00 0.00 0.00 0.00 00:10:38.707 =================================================================================================================== 00:10:38.707 Total : 35916.50 140.30 0.00 0.00 0.00 0.00 0.00 00:10:38.707 00:10:39.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.645 Nvme0n1 : 9.00 35950.67 140.43 0.00 0.00 0.00 0.00 0.00 00:10:39.645 =================================================================================================================== 00:10:39.645 Total : 35950.67 140.43 0.00 0.00 0.00 0.00 0.00 00:10:39.645 00:10:40.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.582 Nvme0n1 : 10.00 35984.10 140.56 0.00 0.00 0.00 0.00 0.00 00:10:40.582 =================================================================================================================== 00:10:40.582 Total : 35984.10 140.56 0.00 0.00 0.00 0.00 0.00 00:10:40.582 00:10:40.582 00:10:40.582 Latency(us) 00:10:40.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.583 Nvme0n1 : 10.00 35986.24 140.57 0.00 0.00 3553.98 2136.47 12320.77 00:10:40.583 =================================================================================================================== 00:10:40.583 Total : 35986.24 140.57 0.00 0.00 3553.98 2136.47 12320.77 00:10:40.583 0 00:10:40.583 20:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 980343 00:10:40.583 20:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 980343 ']' 00:10:40.583 20:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 980343 00:10:40.583 20:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:40.583 20:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.583 20:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 980343 00:10:40.583 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- 
# process_name=reactor_1 00:10:40.583 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:40.583 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 980343' 00:10:40.583 killing process with pid 980343 00:10:40.583 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 980343 00:10:40.583 Received shutdown signal, test time was about 10.000000 seconds 00:10:40.583 00:10:40.583 Latency(us) 00:10:40.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.583 =================================================================================================================== 00:10:40.583 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:40.583 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 980343 00:10:40.842 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:40.842 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:41.101 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:41.101 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:41.360 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:41.360 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:41.360 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 977188 00:10:41.360 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 977188 00:10:41.360 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 977188 Killed "${NVMF_APP[@]}" "$@" 00:10:41.360 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:41.361 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:41.361 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.361 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.361 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:41.361 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=982471 00:10:41.361 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 982471 00:10:41.361 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:41.361 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 982471 ']' 00:10:41.361 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.361 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.361 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.361 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.361 20:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:41.361 [2024-07-26 20:31:29.848109] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:10:41.361 [2024-07-26 20:31:29.848163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.361 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.620 [2024-07-26 20:31:29.935536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.620 [2024-07-26 20:31:29.974963] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.620 [2024-07-26 20:31:29.975000] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.620 [2024-07-26 20:31:29.975010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.620 [2024-07-26 20:31:29.975020] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.620 [2024-07-26 20:31:29.975028] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
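The dirty case differs only in how the grown lvstore comes back: the first target was killed with -9 while the lvstore was open, so nothing was flushed, and the fresh nvmf_tgt above has to rediscover it. Re-creating the AIO bdev triggers blobstore recovery (the bs_recover notices that follow), after which the arithmetic must still hold: 400 MiB / 4 MiB = 100 clusters, one of which goes to metadata here, so 99 total, and the 150 MiB lvol occupies ceil(150/4) = 38 of them, leaving 61 free. A sketch of the post-restart check, assuming $aio_file and $lvs still hold the backing path and lvstore UUID from before the kill:

  scripts/rpc.py bdev_aio_create "$aio_file" aio_bdev 4096   # lvstore is examined and recovered
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | \
      jq -r '"\(.[0].free_clusters)/\(.[0].total_data_clusters)"'   # expect 61/99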
00:10:41.620 [2024-07-26 20:31:29.975049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.188 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.188 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:42.188 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.188 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:42.188 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:42.188 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.188 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:42.447 [2024-07-26 20:31:30.849567] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:42.447 [2024-07-26 20:31:30.849675] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:42.447 [2024-07-26 20:31:30.849703] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:42.447 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:42.447 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0e98392b-0d42-45d0-baeb-b61e38f9513c 00:10:42.447 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=0e98392b-0d42-45d0-baeb-b61e38f9513c 00:10:42.447 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.447 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:42.447 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.447 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.447 20:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:42.706 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0e98392b-0d42-45d0-baeb-b61e38f9513c -t 2000 00:10:42.706 [ 00:10:42.706 { 00:10:42.706 "name": "0e98392b-0d42-45d0-baeb-b61e38f9513c", 00:10:42.706 "aliases": [ 00:10:42.706 "lvs/lvol" 00:10:42.706 ], 00:10:42.706 "product_name": "Logical Volume", 00:10:42.706 "block_size": 4096, 00:10:42.706 "num_blocks": 38912, 00:10:42.706 "uuid": "0e98392b-0d42-45d0-baeb-b61e38f9513c", 00:10:42.706 "assigned_rate_limits": { 00:10:42.706 "rw_ios_per_sec": 0, 00:10:42.706 "rw_mbytes_per_sec": 0, 00:10:42.706 "r_mbytes_per_sec": 0, 00:10:42.706 "w_mbytes_per_sec": 0 00:10:42.706 }, 00:10:42.706 "claimed": false, 00:10:42.706 "zoned": false, 
00:10:42.706 "supported_io_types": { 00:10:42.706 "read": true, 00:10:42.706 "write": true, 00:10:42.706 "unmap": true, 00:10:42.706 "flush": false, 00:10:42.706 "reset": true, 00:10:42.706 "nvme_admin": false, 00:10:42.706 "nvme_io": false, 00:10:42.706 "nvme_io_md": false, 00:10:42.706 "write_zeroes": true, 00:10:42.706 "zcopy": false, 00:10:42.706 "get_zone_info": false, 00:10:42.706 "zone_management": false, 00:10:42.706 "zone_append": false, 00:10:42.706 "compare": false, 00:10:42.706 "compare_and_write": false, 00:10:42.706 "abort": false, 00:10:42.706 "seek_hole": true, 00:10:42.706 "seek_data": true, 00:10:42.706 "copy": false, 00:10:42.706 "nvme_iov_md": false 00:10:42.706 }, 00:10:42.706 "driver_specific": { 00:10:42.706 "lvol": { 00:10:42.706 "lvol_store_uuid": "7798009c-acdb-4772-a1ad-293c5b7595fa", 00:10:42.706 "base_bdev": "aio_bdev", 00:10:42.706 "thin_provision": false, 00:10:42.706 "num_allocated_clusters": 38, 00:10:42.706 "snapshot": false, 00:10:42.706 "clone": false, 00:10:42.706 "esnap_clone": false 00:10:42.706 } 00:10:42.706 } 00:10:42.706 } 00:10:42.706 ] 00:10:42.706 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:42.706 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:42.706 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:42.965 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:42.965 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:42.965 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:43.225 [2024-07-26 20:31:31.709959] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" 
in 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:43.225 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:43.485 request: 00:10:43.485 { 00:10:43.485 "uuid": "7798009c-acdb-4772-a1ad-293c5b7595fa", 00:10:43.485 "method": "bdev_lvol_get_lvstores", 00:10:43.485 "req_id": 1 00:10:43.485 } 00:10:43.485 Got JSON-RPC error response 00:10:43.485 response: 00:10:43.485 { 00:10:43.485 "code": -19, 00:10:43.485 "message": "No such device" 00:10:43.485 } 00:10:43.485 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:43.485 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:43.485 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:43.485 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:43.485 20:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:43.743 aio_bdev 00:10:43.743 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0e98392b-0d42-45d0-baeb-b61e38f9513c 00:10:43.743 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=0e98392b-0d42-45d0-baeb-b61e38f9513c 00:10:43.743 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:43.743 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:43.743 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:43.743 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:43.743 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:43.743 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0e98392b-0d42-45d0-baeb-b61e38f9513c -t 2000 00:10:44.032 [ 00:10:44.032 { 00:10:44.032 "name": "0e98392b-0d42-45d0-baeb-b61e38f9513c", 00:10:44.032 "aliases": [ 00:10:44.032 "lvs/lvol" 00:10:44.032 ], 00:10:44.032 "product_name": "Logical Volume", 00:10:44.032 "block_size": 4096, 00:10:44.032 "num_blocks": 38912, 00:10:44.032 "uuid": "0e98392b-0d42-45d0-baeb-b61e38f9513c", 00:10:44.032 "assigned_rate_limits": { 00:10:44.032 "rw_ios_per_sec": 0, 00:10:44.032 "rw_mbytes_per_sec": 0, 00:10:44.032 "r_mbytes_per_sec": 0, 00:10:44.032 "w_mbytes_per_sec": 0 00:10:44.032 }, 00:10:44.032 "claimed": false, 00:10:44.032 "zoned": false, 00:10:44.032 "supported_io_types": { 00:10:44.032 "read": true, 00:10:44.032 "write": true, 00:10:44.032 "unmap": true, 00:10:44.032 "flush": false, 00:10:44.032 "reset": true, 00:10:44.033 "nvme_admin": false, 00:10:44.033 "nvme_io": false, 00:10:44.033 "nvme_io_md": false, 00:10:44.033 "write_zeroes": true, 00:10:44.033 "zcopy": false, 00:10:44.033 "get_zone_info": false, 00:10:44.033 "zone_management": false, 00:10:44.033 "zone_append": false, 00:10:44.033 "compare": false, 00:10:44.033 "compare_and_write": false, 00:10:44.033 "abort": false, 00:10:44.033 "seek_hole": true, 00:10:44.033 "seek_data": true, 00:10:44.033 "copy": false, 00:10:44.033 "nvme_iov_md": false 00:10:44.033 }, 00:10:44.033 "driver_specific": { 00:10:44.033 "lvol": { 00:10:44.033 "lvol_store_uuid": "7798009c-acdb-4772-a1ad-293c5b7595fa", 00:10:44.033 "base_bdev": "aio_bdev", 00:10:44.033 "thin_provision": false, 00:10:44.033 "num_allocated_clusters": 38, 00:10:44.033 "snapshot": false, 00:10:44.033 "clone": false, 00:10:44.033 "esnap_clone": false 00:10:44.033 } 00:10:44.033 } 00:10:44.033 } 00:10:44.033 ] 00:10:44.033 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:44.033 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:44.033 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:44.292 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:44.292 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:44.292 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:44.292 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:44.292 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0e98392b-0d42-45d0-baeb-b61e38f9513c 00:10:44.551 20:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7798009c-acdb-4772-a1ad-293c5b7595fa 00:10:44.551 20:31:33 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:44.810 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:44.810 00:10:44.810 real 0m16.873s 00:10:44.810 user 0m43.285s 00:10:44.810 sys 0m3.324s 00:10:44.810 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:44.810 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:44.810 ************************************ 00:10:44.810 END TEST lvs_grow_dirty 00:10:44.810 ************************************ 00:10:44.810 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:44.810 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:44.810 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:44.810 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:44.810 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:44.810 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:44.810 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:44.810 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:44.810 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:44.810 nvmf_trace.0 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:45.069 rmmod nvme_rdma 00:10:45.069 rmmod nvme_fabrics 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 982471 ']' 00:10:45.069 20:31:33 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 982471 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 982471 ']' 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 982471 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 982471 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 982471' 00:10:45.069 killing process with pid 982471 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 982471 00:10:45.069 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 982471 00:10:45.328 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:45.328 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:45.328 00:10:45.328 real 0m42.131s 00:10:45.328 user 1m5.101s 00:10:45.328 sys 0m11.620s 00:10:45.328 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.328 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:45.328 ************************************ 00:10:45.328 END TEST nvmf_lvs_grow 00:10:45.328 ************************************ 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.329 ************************************ 00:10:45.329 START TEST nvmf_bdev_io_wait 00:10:45.329 ************************************ 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:10:45.329 * Looking for test storage... 
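Condensed from the xtrace above, the dirty-lvstore recovery that lvs_grow_dirty just exercised is a short rpc.py sequence: recreate the AIO bdev over the file that still holds the lvstore, wait for examine to re-register the logical volume, check the cluster counts, then tear everything down. A minimal sketch, with paths and UUIDs copied from this run (any other run gets fresh ones):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  aio_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
  # Recreate the AIO bdev on the file that still carries the dirty lvstore.
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096
  # Block until bdev examine has re-registered the logical volume on top of it.
  $rpc bdev_wait_for_examine
  # The recovered lvstore must report the grown geometry (99 data clusters, 61 free).
  $rpc bdev_lvol_get_lvstores -u 7798009c-acdb-4772-a1ad-293c5b7595fa | jq -r '.[0].total_data_clusters'
  # Teardown: lvol, then lvstore, then the AIO bdev and its backing file.
  $rpc bdev_lvol_delete 0e98392b-0d42-45d0-baeb-b61e38f9513c
  $rpc bdev_lvol_delete_lvstore -u 7798009c-acdb-4772-a1ad-293c5b7595fa
  $rpc bdev_aio_delete aio_bdev
  rm -f "$aio_file"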
00:10:45.329 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 
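The nvmf/common.sh prologue above fixes the fabric identity that every test in this suite reuses. In outline, with values taken from this run's output (the hostid derivation is an assumption about how common.sh strips the NQN prefix, not a verified excerpt):

  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
  NVMF_IP_PREFIX=192.168.100               # test subnet; the two RDMA ports get .8 and .9
  NVME_HOSTNQN=$(nvme gen-hostnqn)         # here: nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  NVME_HOSTID=${NVME_HOSTNQN##*:}          # assumed: the trailing uuid doubles as the host ID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id plus full tracepoint mask (common.sh@29)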
00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.329 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.588 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:45.589 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:45.589 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:10:45.589 20:31:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 
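The array declarations above feed gather_supported_nvmf_pci_devs, which the next stretch of xtrace spends matching NICs by PCI vendor:device ID. Reduced to a sketch (IDs copied from the log; the lspci scan is a hypothetical stand-in for the pci_bus_cache lookup common.sh actually performs):

  intel=0x8086 mellanox=0x15b3
  # Mellanox parts accepted for RDMA runs (common.sh@301-318); 0x1015 is the
  # ConnectX-4 Lx that both ports 0000:d9:00.0 and 0000:d9:00.1 report below.
  mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)
  pci_devs=()
  for id in "${mlx[@]}"; do
    while read -r addr _; do pci_devs+=("$addr"); done \
      < <(lspci -Dn -d "${mellanox#0x}:${id#0x}")
  done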
00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:53.711 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:53.711 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:53.711 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:53.711 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:53.711 20:31:41 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:53.711 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:53.712 20:31:41 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:53.712 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:53.712 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:53.712 altname enp217s0f0np0 00:10:53.712 altname ens818f0np0 00:10:53.712 inet 192.168.100.8/24 scope global mlx_0_0 00:10:53.712 valid_lft forever preferred_lft forever 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:53.712 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:53.712 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:53.712 altname enp217s0f1np1 00:10:53.712 altname ens818f1np1 00:10:53.712 inet 192.168.100.9/24 scope global mlx_0_1 00:10:53.712 valid_lft forever preferred_lft forever 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@86 -- # get_rdma_if_list 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.712 20:31:41 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:53.712 192.168.100.9' 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:53.712 192.168.100.9' 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:53.712 192.168.100.9' 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.712 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=987020 00:10:53.713 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:53.713 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 987020 00:10:53.713 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 987020 ']' 00:10:53.713 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.713 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.713 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.713 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.713 20:31:41 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.713 [2024-07-26 20:31:41.740426] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
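With 192.168.100.8 and .9 assigned to the two mlx ports, nvmfappstart brings up the target before any RPC is issued. A sketch of that step, using the flags and pid visible in this run (waitforlisten is the common.sh helper that polls the RPC socket until it answers):

  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
  modprobe nvme-rdma
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!                 # 987020 in this run
  waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs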
00:10:53.713 [2024-07-26 20:31:41.740485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.713 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.713 [2024-07-26 20:31:41.825935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.713 [2024-07-26 20:31:41.868897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.713 [2024-07-26 20:31:41.868940] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.713 [2024-07-26 20:31:41.868950] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.713 [2024-07-26 20:31:41.868959] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.713 [2024-07-26 20:31:41.868965] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.713 [2024-07-26 20:31:41.869013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.713 [2024-07-26 20:31:41.869110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.713 [2024-07-26 20:31:41.869197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.713 [2024-07-26 20:31:41.869199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.281 [2024-07-26 20:31:42.692466] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f13dd0/0x1f182c0) succeed. 00:10:54.281 [2024-07-26 20:31:42.701314] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f15410/0x1f59950) succeed. 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.281 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.540 Malloc0 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.541 [2024-07-26 20:31:42.881510] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=987284 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=987286 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:54.541 { 00:10:54.541 "params": { 00:10:54.541 "name": "Nvme$subsystem", 00:10:54.541 "trtype": "$TEST_TRANSPORT", 00:10:54.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:54.541 "adrfam": "ipv4", 00:10:54.541 "trsvcid": "$NVMF_PORT", 00:10:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:54.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:54.541 "hdgst": ${hdgst:-false}, 00:10:54.541 "ddgst": ${ddgst:-false} 00:10:54.541 }, 00:10:54.541 "method": "bdev_nvme_attach_controller" 00:10:54.541 } 00:10:54.541 EOF 00:10:54.541 )") 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=987288 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:54.541 { 00:10:54.541 "params": { 00:10:54.541 "name": "Nvme$subsystem", 00:10:54.541 "trtype": "$TEST_TRANSPORT", 00:10:54.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:54.541 "adrfam": "ipv4", 00:10:54.541 "trsvcid": "$NVMF_PORT", 00:10:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:54.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:54.541 "hdgst": ${hdgst:-false}, 00:10:54.541 "ddgst": ${ddgst:-false} 00:10:54.541 }, 00:10:54.541 "method": "bdev_nvme_attach_controller" 00:10:54.541 } 00:10:54.541 EOF 00:10:54.541 )") 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=987291 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:54.541 { 00:10:54.541 "params": { 00:10:54.541 "name": "Nvme$subsystem", 
00:10:54.541 "trtype": "$TEST_TRANSPORT", 00:10:54.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:54.541 "adrfam": "ipv4", 00:10:54.541 "trsvcid": "$NVMF_PORT", 00:10:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:54.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:54.541 "hdgst": ${hdgst:-false}, 00:10:54.541 "ddgst": ${ddgst:-false} 00:10:54.541 }, 00:10:54.541 "method": "bdev_nvme_attach_controller" 00:10:54.541 } 00:10:54.541 EOF 00:10:54.541 )") 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:54.541 { 00:10:54.541 "params": { 00:10:54.541 "name": "Nvme$subsystem", 00:10:54.541 "trtype": "$TEST_TRANSPORT", 00:10:54.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:54.541 "adrfam": "ipv4", 00:10:54.541 "trsvcid": "$NVMF_PORT", 00:10:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:54.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:54.541 "hdgst": ${hdgst:-false}, 00:10:54.541 "ddgst": ${ddgst:-false} 00:10:54.541 }, 00:10:54.541 "method": "bdev_nvme_attach_controller" 00:10:54.541 } 00:10:54.541 EOF 00:10:54.541 )") 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 987284 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:54.541 "params": { 00:10:54.541 "name": "Nvme1", 00:10:54.541 "trtype": "rdma", 00:10:54.541 "traddr": "192.168.100.8", 00:10:54.541 "adrfam": "ipv4", 00:10:54.541 "trsvcid": "4420", 00:10:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:54.541 "hdgst": false, 00:10:54.541 "ddgst": false 00:10:54.541 }, 00:10:54.541 "method": "bdev_nvme_attach_controller" 00:10:54.541 }' 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
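The four bdevperf instances launched above (write, read, flush, unmap on core masks 0x10, 0x20, 0x40, 0x80) each read their controller definition from --json /dev/fd/63, i.e. a process substitution of gen_nvmf_target_json, which expands the heredoc template into the Nvme1 attach call printed next. One instance, sketched:

  # gen_nvmf_target_json emits the bdev_nvme_attach_controller config for
  # Nvme1 (rdma, 192.168.100.8:4420, cnode1); jq . normalizes it on the way in.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json) &
  WRITE_PID=$!          # 987284 in this run
  wait "$WRITE_PID"     # bdev_io_wait.sh@37 reaps the write job first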
00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:54.541 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:54.541 "params": { 00:10:54.541 "name": "Nvme1", 00:10:54.541 "trtype": "rdma", 00:10:54.541 "traddr": "192.168.100.8", 00:10:54.541 "adrfam": "ipv4", 00:10:54.541 "trsvcid": "4420", 00:10:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:54.541 "hdgst": false, 00:10:54.541 "ddgst": false 00:10:54.541 }, 00:10:54.541 "method": "bdev_nvme_attach_controller" 00:10:54.541 }' 00:10:54.542 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:54.542 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:54.542 "params": { 00:10:54.542 "name": "Nvme1", 00:10:54.542 "trtype": "rdma", 00:10:54.542 "traddr": "192.168.100.8", 00:10:54.542 "adrfam": "ipv4", 00:10:54.542 "trsvcid": "4420", 00:10:54.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:54.542 "hdgst": false, 00:10:54.542 "ddgst": false 00:10:54.542 }, 00:10:54.542 "method": "bdev_nvme_attach_controller" 00:10:54.542 }' 00:10:54.542 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:54.542 20:31:42 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:54.542 "params": { 00:10:54.542 "name": "Nvme1", 00:10:54.542 "trtype": "rdma", 00:10:54.542 "traddr": "192.168.100.8", 00:10:54.542 "adrfam": "ipv4", 00:10:54.542 "trsvcid": "4420", 00:10:54.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:54.542 "hdgst": false, 00:10:54.542 "ddgst": false 00:10:54.542 }, 00:10:54.542 "method": "bdev_nvme_attach_controller" 00:10:54.542 }' [2024-07-26 20:31:42.933425] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:10:54.542 [2024-07-26 20:31:42.933426] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:10:54.542 [2024-07-26 20:31:42.933483] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:54.542 [2024-07-26 20:31:42.933484] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:54.542 [2024-07-26 20:31:42.934198] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:10:54.542 [2024-07-26 20:31:42.934243] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] [2024-07-26 20:31:42.938728] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization...
00:10:54.542 [2024-07-26 20:31:42.938781] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:54.542 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.542 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.799 [2024-07-26 20:31:43.136138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.800 [2024-07-26 20:31:43.161870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:54.800 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.800 [2024-07-26 20:31:43.239583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.800 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.800 [2024-07-26 20:31:43.269225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:54.800 [2024-07-26 20:31:43.294181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.800 [2024-07-26 20:31:43.318247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:55.058 [2024-07-26 20:31:43.395554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.058 [2024-07-26 20:31:43.425825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:55.058 Running I/O for 1 seconds... 00:10:55.058 Running I/O for 1 seconds... 00:10:55.058 Running I/O for 1 seconds... 00:10:55.058 Running I/O for 1 seconds... 00:10:55.993 00:10:55.993 Latency(us) 00:10:55.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.993 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:55.993 Nvme1n1 : 1.00 262687.64 1026.12 0.00 0.00 485.99 196.61 1966.08 00:10:55.993 =================================================================================================================== 00:10:55.993 Total : 262687.64 1026.12 0.00 0.00 485.99 196.61 1966.08 00:10:55.993 00:10:55.993 Latency(us) 00:10:55.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.993 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:55.993 Nvme1n1 : 1.01 17132.38 66.92 0.00 0.00 7448.03 4272.95 13526.63 00:10:55.993 =================================================================================================================== 00:10:55.993 Total : 17132.38 66.92 0.00 0.00 7448.03 4272.95 13526.63 00:10:56.251 00:10:56.252 Latency(us) 00:10:56.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.252 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:56.252 Nvme1n1 : 1.00 17749.98 69.34 0.00 0.00 7191.27 4351.59 15518.92 00:10:56.252 =================================================================================================================== 00:10:56.252 Total : 17749.98 69.34 0.00 0.00 7191.27 4351.59 15518.92 00:10:56.252 00:10:56.252 Latency(us) 00:10:56.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.252 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:56.252 Nvme1n1 : 1.01 15270.74 59.65 0.00 0.00 8357.75 4377.80 20027.80 00:10:56.252 =================================================================================================================== 00:10:56.252 Total : 15270.74 59.65 0.00 0.00 8357.75 4377.80 20027.80 00:10:56.511 20:31:44 
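[Annotation] The four latency tables above come from four bdevperf instances run in parallel against the same cnode1, one workload per core mask (0x10 write, 0x20 read, 0x40 flush, 0x80 unmap, matching the Core Mask column), after which bdev_io_wait.sh waits on the real PIDs (987284/987286/987288/987291). A hedged sketch of that fan-out, reusing the gen_target_json stand-in from above; the loop itself is illustrative, SPDK_DIR names the workspace checkout, and the per-instance -i/-s shared-memory flags seen in the trace are dropped for brevity.

# Illustrative fan-out; the real script records four concrete PIDs and
# waits on each one at bdev_io_wait.sh@37-@40.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
declare -A workloads=([0x10]=write [0x20]=read [0x40]=flush [0x80]=unmap)
pids=()
for mask in "${!workloads[@]}"; do
    "$SPDK_DIR/build/examples/bdevperf" -m "$mask" --json <(gen_target_json) \
        -q 128 -o 4096 -w "${workloads[$mask]}" -t 1 &
    pids+=($!)
done
wait "${pids[@]}"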
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 987286 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 987288 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 987291 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.511 20:31:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:56.511 rmmod nvme_rdma 00:10:56.511 rmmod nvme_fabrics 00:10:56.511 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.511 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:56.511 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:56.511 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 987020 ']' 00:10:56.511 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 987020 00:10:56.511 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 987020 ']' 00:10:56.511 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 987020 00:10:56.511 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:56.511 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:56.511 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 987020 00:10:56.770 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:56.770 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:56.770 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 987020' 00:10:56.770 killing process with pid 987020 00:10:56.770 20:31:45 
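[Annotation] The @950-@974 sequence above is autotest_common.sh's killprocess helper shutting down the nvmf target. Reconstructed from the trace, simplified; any retry or error handling beyond what the log shows is omitted here.

# killprocess pattern as reconstructed from the autotest_common.sh trace above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2> /dev/null || return 0             # @954: already gone?
    if [ "$(uname)" = Linux ]; then                     # @955
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid") # @956: e.g. reactor_0
        [ "$process_name" = sudo ] && return 1          # @960: never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"                # @968
    kill "$pid"                                         # @969
    wait "$pid"                                         # @974
}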
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 987020 00:10:56.770 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 987020 00:10:56.770 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.770 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:56.770 00:10:56.770 real 0m11.572s 00:10:56.770 user 0m21.134s 00:10:56.770 sys 0m7.472s 00:10:56.770 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.770 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:56.770 ************************************ 00:10:56.770 END TEST nvmf_bdev_io_wait 00:10:56.770 ************************************ 00:10:57.029 20:31:45 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:57.029 20:31:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:57.029 20:31:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.029 20:31:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:57.029 ************************************ 00:10:57.029 START TEST nvmf_queue_depth 00:10:57.029 ************************************ 00:10:57.029 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:57.029 * Looking for test storage... 
00:10:57.029 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:57.029 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.029 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:57.029 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.029 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.029 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.029 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:57.030 
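[Annotation] Each source of /etc/opt/spdk-pkgdep/paths/export.sh prepends the same three toolchain directories, which is why /opt/go, /opt/golangci and /opt/protoc recur so many times in the PATH values above. The net effect, condensed from the @2-@5 trace:

# Condensed effect of paths/export.sh@2-@5 as traced above.
PATH=/opt/golangci/1.54.2/bin:$PATH
PATH=/opt/go/1.21.1/bin:$PATH
PATH=/opt/protoc/21.7/bin:$PATH
export PATH
# Hypothetical cleanup, not part of the suite, to keep PATH duplicate-free:
# PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:*$//')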
20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:10:57.030 20:31:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:05.155 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.155 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:11:05.155 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 
00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:05.156 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:05.156 
20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:05.156 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:05.156 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:05.156 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 
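[Annotation] The "Found net devices under ..." lines above come from globbing each matched PCI function's net/ directory in sysfs and stripping the leading path. A minimal standalone version of that loop, using the two Mellanox functions found on this rig:

# Standalone version of the nvmf/common.sh@383-@400 discovery traced above.
for pci in 0000:d9:00.0 0000:d9:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # @399: keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done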
00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:05.156 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:05.416 20:31:53 
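[Annotation] load_ib_rdma_modules above amounts to loading the InfiniBand/RDMA kernel stack before any interface and IP setup; the module list is taken directly from the nvmf/common.sh@62-@68 trace:

# The module set loaded by load_ib_rdma_modules, in the traced order.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done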
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:05.416 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:05.416 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:05.416 altname enp217s0f0np0 00:11:05.416 altname ens818f0np0 00:11:05.416 inet 192.168.100.8/24 scope global mlx_0_0 00:11:05.416 valid_lft forever preferred_lft forever 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:05.416 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:05.416 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:05.416 altname enp217s0f1np1 00:11:05.416 altname ens818f1np1 00:11:05.416 inet 192.168.100.9/24 scope global mlx_0_1 00:11:05.416 valid_lft forever preferred_lft forever 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 
00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:05.416 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:11:05.417 192.168.100.9' 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:05.417 192.168.100.9' 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:05.417 192.168.100.9' 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=991761 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 991761 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 991761 ']' 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:05.417 20:31:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:05.417 [2024-07-26 20:31:53.951275] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
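[Annotation] The address plumbing above reduces to one small pipeline. get_ip_address is shown here exactly as traced at nvmf/common.sh@112-@113; the list assembly and the head/tail selection mirror @456-@458, with the surrounding loop written as a sketch:

# get_ip_address as traced; the surrounding list handling is a sketch.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9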
00:11:05.417 [2024-07-26 20:31:53.951331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.675 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.675 [2024-07-26 20:31:54.040002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.675 [2024-07-26 20:31:54.078830] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.675 [2024-07-26 20:31:54.078867] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.675 [2024-07-26 20:31:54.078880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.675 [2024-07-26 20:31:54.078888] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.675 [2024-07-26 20:31:54.078896] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:05.675 [2024-07-26 20:31:54.078923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.244 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:06.244 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:06.244 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:06.244 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:06.244 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:06.504 [2024-07-26 20:31:54.827063] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d7af40/0x1d7f430) succeed. 00:11:06.504 [2024-07-26 20:31:54.835629] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d7c440/0x1dc0ac0) succeed. 
00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:06.504 Malloc0 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:06.504 [2024-07-26 20:31:54.921488] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=992030 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 992030 /var/tmp/bdevperf.sock 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 992030 ']' 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 
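[Annotation] Steps @23-@27 of queue_depth.sh above are ordinary SPDK RPCs issued through the rpc_cmd wrapper: create the RDMA transport, back it with a 64 MiB malloc bdev of 512-byte blocks, and expose it as cnode1 on 192.168.100.8:4420. Written as direct calls they would look like this; the rpc.py path is inferred from the workspace layout in the log.

# The queue_depth.sh target bring-up as direct RPC calls (rpc.py path inferred).
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420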
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:06.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:06.504 20:31:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:06.504 [2024-07-26 20:31:54.957799] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:11:06.504 [2024-07-26 20:31:54.957844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid992030 ] 00:11:06.504 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.504 [2024-07-26 20:31:55.040005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.764 [2024-07-26 20:31:55.078695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.764 20:31:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:06.764 20:31:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:06.764 20:31:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:06.764 20:31:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.764 20:31:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:06.764 NVMe0n1 00:11:06.764 20:31:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.764 20:31:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:07.023 Running I/O for 10 seconds... 
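[Annotation] The queue-depth measurement itself uses bdevperf's wait-for-RPC mode: -z starts the app idle on a private socket, the NVMe-oF controller is attached over that socket, and bdevperf.py perform_tests kicks off the run traced above. Condensed from queue_depth.sh@29-@35; SPDK_DIR stands in for the workspace checkout, and the real script waits for the socket (waitforlisten) before issuing RPCs.

# Condensed from the queue_depth.sh trace; SPDK_DIR is an assumption.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
sock=/var/tmp/bdevperf.sock
"$SPDK_DIR/build/examples/bdevperf" -z -r "$sock" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
"$SPDK_DIR/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests
wait "$bdevperf_pid"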
00:11:17.063 00:11:17.063 Latency(us) 00:11:17.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:17.063 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:17.063 Verification LBA range: start 0x0 length 0x4000 00:11:17.063 NVMe0n1 : 10.03 18195.33 71.08 0.00 0.00 56117.36 6081.74 36490.44 00:11:17.063 =================================================================================================================== 00:11:17.063 Total : 18195.33 71.08 0.00 0.00 56117.36 6081.74 36490.44 00:11:17.063 0 00:11:17.063 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 992030 00:11:17.063 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 992030 ']' 00:11:17.063 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 992030 00:11:17.063 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:17.063 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:17.063 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 992030 00:11:17.063 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:17.063 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:17.063 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 992030' 00:11:17.063 killing process with pid 992030 00:11:17.063 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 992030 00:11:17.063 Received shutdown signal, test time was about 10.000000 seconds 00:11:17.063 00:11:17.063 Latency(us) 00:11:17.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:17.063 =================================================================================================================== 00:11:17.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:17.063 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 992030 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:17.322 rmmod nvme_rdma 00:11:17.322 rmmod nvme_fabrics 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:17.322 
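[Annotation] nvmftestfini's module unload, per the @120-@125 trace here and at the end of the previous test, drops errexit and retries removal up to 20 times before restoring it. Sketch only; the pacing between attempts is not visible in the log, so the sleep below is an assumption.

# Unload pattern from the nvmftestfini trace; the sleep interval is an assumption.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e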
20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 991761 ']' 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 991761 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 991761 ']' 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 991761 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 991761 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 991761' 00:11:17.322 killing process with pid 991761 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 991761 00:11:17.322 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 991761 00:11:17.581 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:17.581 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:17.581 00:11:17.581 real 0m20.604s 00:11:17.581 user 0m25.300s 00:11:17.581 sys 0m6.988s 00:11:17.581 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.581 20:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:17.581 ************************************ 00:11:17.581 END TEST nvmf_queue_depth 00:11:17.581 ************************************ 00:11:17.581 20:32:06 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:17.581 20:32:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:17.581 20:32:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.581 20:32:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:17.581 ************************************ 00:11:17.581 START TEST nvmf_target_multipath 00:11:17.581 ************************************ 00:11:17.581 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:17.839 * Looking for test storage... 
00:11:17.839 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:17.839 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.839 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:17.839 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.839 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.839 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:11:17.840 20:32:06 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@296 -- # e810=() 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:25.966 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:25.966 
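The device-discovery step traced above amounts to matching PCI vendor:device IDs against known RDMA-capable NICs (e810/x722/mlx arrays) and then resolving each hit to its netdev through sysfs. A minimal standalone sketch of that pattern follows; the pci_bus_cache lookup is internal to nvmf/common.sh, so lspci stands in for it here, and this should be read as an approximation rather than the script's exact code:

    #!/usr/bin/env bash
    # Approximate reconstruction of gather_supported_nvmf_pci_devs:
    # match Mellanox (vendor 0x15b3) devices such as 0x1015 (the ID
    # reported in this run), then list the net interfaces sysfs
    # exposes for each PCI address.
    mellanox=15b3
    for addr in $(lspci -Dn -d "${mellanox}:" | awk '{print $1}'); do
        id=$(lspci -Dn -s "$addr" | awk '{print $3}')   # vendor:device
        echo "Found $addr ($id)"
        for netdev in /sys/bus/pci/devices/"$addr"/net/*; do
            [ -e "$netdev" ] && echo "  net device: ${netdev##*/}"
        done
    done

On this machine the loop would report 0000:d9:00.0 and 0000:d9:00.1 (15b3:1015) with net devices mlx_0_0 and mlx_0_1, matching the "Found ..." lines in the trace.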
20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:25.966 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:25.966 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: 
mlx_0_1' 00:11:25.966 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:25.966 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:25.967 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:25.967 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:25.967 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:25.967 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:25.967 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:25.967 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 
00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:26.232 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:26.232 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:26.232 altname enp217s0f0np0 00:11:26.232 altname ens818f0np0 00:11:26.232 inet 192.168.100.8/24 scope global mlx_0_0 00:11:26.232 valid_lft forever preferred_lft forever 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:26.232 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:26.232 
link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:26.232 altname enp217s0f1np1 00:11:26.232 altname ens818f1np1 00:11:26.232 inet 192.168.100.9/24 scope global mlx_0_1 00:11:26.232 valid_lft forever preferred_lft forever 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:26.232 192.168.100.9' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:26.232 192.168.100.9' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:26.232 192.168.100.9' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:11:26.232 run this test only with TCP transport for now 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # 
'[' rdma == tcp ']' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:26.232 rmmod nvme_rdma 00:11:26.232 rmmod nvme_fabrics 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:26.232 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:26.233 00:11:26.233 real 0m8.675s 00:11:26.233 user 0m2.331s 00:11:26.233 sys 0m6.557s 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.233 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:26.233 ************************************ 00:11:26.233 END TEST nvmf_target_multipath 00:11:26.233 ************************************ 00:11:26.493 20:32:14 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:26.493 20:32:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:26.493 20:32:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.493 20:32:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:26.493 ************************************ 00:11:26.493 START TEST nvmf_zcopy 00:11:26.493 ************************************ 00:11:26.493 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:26.493 * Looking for test storage... 00:11:26.493 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:26.493 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.493 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:26.493 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.494 20:32:14 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.494 
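One helper pipeline recurs throughout these traces: get_ip_address extracts an interface's IPv4 address by parsing the one-record-per-line output of ip(8), exactly as shown at nvmf/common.sh@112-113 above. A minimal sketch of that helper; the function name and awk/cut fields come straight from the trace, while the empty-address check mirrors the `[[ -z ... ]]` guard at common.sh@75:

    # Extract the primary IPv4 address of an interface.
    get_ip_address() {
        local interface=$1
        # "ip -o" prints one record per line; field 4 is "addr/prefix".
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    ip0=$(get_ip_address mlx_0_0)   # 192.168.100.8 in this run
    ip1=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run
    [ -z "$ip0" ] && echo "no IPv4 address on mlx_0_0" >&2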
20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.494 20:32:14 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:34.622 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:34.622 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:34.623 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- 
# [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:34.623 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:34.623 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:34.623 20:32:22 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 
00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:34.623 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:34.623 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:34.623 altname enp217s0f0np0 00:11:34.623 altname ens818f0np0 00:11:34.623 inet 192.168.100.8/24 scope global mlx_0_0 00:11:34.623 valid_lft forever preferred_lft forever 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:34.623 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:34.623 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:34.623 altname enp217s0f1np1 00:11:34.623 altname ens818f1np1 00:11:34.623 inet 192.168.100.9/24 scope global mlx_0_1 00:11:34.623 valid_lft forever preferred_lft forever 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:11:34.623 20:32:22 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:34.623 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:34.624 192.168.100.9' 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:34.624 192.168.100.9' 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:34.624 192.168.100.9' 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:34.624 20:32:22 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1001957 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1001957 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1001957 ']' 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.624 20:32:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:34.624 [2024-07-26 20:32:22.874897] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:11:34.624 [2024-07-26 20:32:22.874955] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.624 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.624 [2024-07-26 20:32:22.970511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.624 [2024-07-26 20:32:23.021564] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.624 [2024-07-26 20:32:23.021612] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.624 [2024-07-26 20:32:23.021634] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.624 [2024-07-26 20:32:23.021646] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.624 [2024-07-26 20:32:23.021656] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
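
The waitforlisten trace above blocks until the freshly started nvmf_tgt (pid 1001957) is up and listening on /var/tmp/spdk.sock. A minimal sketch of that idea follows; it is not the exact autotest_common.sh implementation (which polls via the RPC client with max_retries=100), just the shape of the wait loop:

    # Sketch only: wait until the target process is alive and its RPC socket
    # exists, giving up after ~10 seconds. The real helper issues an actual
    # RPC call rather than just testing for the socket file.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # process died early
            [ -S "$rpc_addr" ] && return 0            # socket has appeared
            sleep 0.1
        done
        return 1
    }
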
00:11:34.624 [2024-07-26 20:32:23.021687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:11:34.624 Unsupported transport: rdma 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@808 -- # type=--id 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@809 -- # id=0 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # for n in $shm_files 00:11:34.624 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:34.624 nvmf_trace.0 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # return 0 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:34.885 rmmod nvme_rdma 00:11:34.885 rmmod nvme_fabrics 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 
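
nvmfcleanup above suspends errexit (set +e) and retries module removal, because nvme-rdma can still hold references immediately after a test tears down its connections. A condensed sketch of that pattern; the real loop in nvmf/common.sh@120-125 retries each module up to 20 times, and the sleep here is an added assumption, not in the trace:

    set +e                      # removal may fail while references drain
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1                 # hypothetical back-off between attempts
    done
    set -e
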
00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1001957 ']' 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1001957 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1001957 ']' 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1001957 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1001957 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1001957' 00:11:34.885 killing process with pid 1001957 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1001957 00:11:34.885 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1001957 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:35.145 00:11:35.145 real 0m8.651s 00:11:35.145 user 0m2.828s 00:11:35.145 sys 0m6.438s 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.145 ************************************ 00:11:35.145 END TEST nvmf_zcopy 00:11:35.145 ************************************ 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:35.145 ************************************ 00:11:35.145 START TEST nvmf_nmic 00:11:35.145 ************************************ 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:35.145 * Looking for test storage... 
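
killprocess, traced above for pid 1001957, checks what it is about to kill before killing it: ps -o comm= resolves the process name (reactor_1 here, an SPDK reactor thread), a sudo wrapper is refused, and wait reaps the child so its exit status is collected. A condensed sketch of the same flow:

    # Condensed sketch of the killprocess helper seen in the trace.
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0        # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")        # e.g. reactor_1
        [ "$name" = sudo ] && return 1                 # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                            # reap; ignore its exit code
    }
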
00:11:35.145 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.145 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
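
The three ever-longer PATH dumps above come from paths/export.sh prepending the Go, protoc, and golangci directories each time it is sourced, so repeated sourcing snowballs duplicates. Purely as an illustration (this guard is not part of the original script), an idempotent prepend avoids the growth:

    # Illustration only, not in paths/export.sh: prepend a directory to PATH
    # only if it is not already present, so re-sourcing stays idempotent.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already on PATH, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin
    export PATH
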
00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:11:35.405 20:32:23 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
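
The e810, x722, and mlx lists begun above are filled by indexing pci_bus_cache, an associative array mapping "vendor:device" IDs to PCI addresses. How that cache is populated is not visible in this trace; the following is a plausible sysfs-based sketch under that assumption, and the real gather_supported_nvmf_pci_devs may differ:

    # Assumed mechanism: build a "vendor:device" -> "pci addresses" cache
    # from sysfs, then class lists become plain lookups.
    declare -A pci_bus_cache
    for dev in /sys/bus/pci/devices/*; do
        id="$(<"$dev/vendor"):$(<"$dev/device")"   # e.g. 0x15b3:0x1015
        pci_bus_cache[$id]+="${dev##*/} "          # append 0000:d9:00.x
    done
    # Unquoted on purpose: split the cached addresses into array elements.
    mlx=(${pci_bus_cache["0x15b3:0x1015"]})        # the ConnectX-4 Lx IDs found above
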
00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:43.537 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:43.537 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.537 
20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:43.537 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:43.537 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:43.537 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:43.538 20:32:31 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:43.538 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:43.538 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:43.538 altname enp217s0f0np0 00:11:43.538 altname ens818f0np0 00:11:43.538 inet 192.168.100.8/24 scope global mlx_0_0 00:11:43.538 valid_lft forever preferred_lft forever 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 
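
Inside [[ ]] the right-hand side of == is a glob pattern, which is why the comparisons above appear as \m\l\x\_\0\_\0: every character is backslash-escaped to force a literal match. Quoting the variable achieves the same effect, and continue 2 resumes the outer loop once a device has been matched and printed. A sketch of the loop at nvmf/common.sh@101-105:

    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then   # quoted = literal match
                echo "$net_dev"
                continue 2    # next net_dev: one match per device is enough
            fi
        done
    done
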
00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:43.538 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:43.538 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:43.538 altname enp217s0f1np1 00:11:43.538 altname ens818f1np1 00:11:43.538 inet 192.168.100.9/24 scope global mlx_0_1 00:11:43.538 valid_lft forever preferred_lft forever 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 
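
get_ip_address, traced twice above, is a three-stage pipeline: ip -o prints one record per line, awk takes field 4 (the ADDR/PREFIX pair), and cut strips the prefix length. As a standalone sketch of nvmf/common.sh@112-113:

    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig
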
00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:43.538 192.168.100.9' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:43.538 192.168.100.9' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:43.538 192.168.100.9' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:43.538 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:43.539 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:43.539 20:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.539 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1006143 00:11:43.539 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.539 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1006143 00:11:43.539 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1006143 ']' 00:11:43.539 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.539 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.539 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.539 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.539 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:43.539 [2024-07-26 20:32:32.049317] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:11:43.539 [2024-07-26 20:32:32.049366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.799 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.799 [2024-07-26 20:32:32.135617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.799 [2024-07-26 20:32:32.176168] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.799 [2024-07-26 20:32:32.176222] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.799 [2024-07-26 20:32:32.176232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.799 [2024-07-26 20:32:32.176240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.799 [2024-07-26 20:32:32.176247] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
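
A few entries above, nvmf/common.sh@456-458 turns the two-line RDMA_IP_LIST into the first and second target IPs with head and tail. Reduced to its essentials:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
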
00:11:43.799 [2024-07-26 20:32:32.176340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.799 [2024-07-26 20:32:32.176454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.799 [2024-07-26 20:32:32.176521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.799 [2024-07-26 20:32:32.176523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.367 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:44.367 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:44.367 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:44.367 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:44.367 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.367 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.367 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:44.367 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.367 20:32:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.635 [2024-07-26 20:32:32.937522] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x689ea0/0x68e390) succeed. 00:11:44.635 [2024-07-26 20:32:32.946779] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x68b4e0/0x6cfa20) succeed. 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.635 Malloc0 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:44.635 20:32:33 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.635 [2024-07-26 20:32:33.111535] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:44.635 test case1: single bdev can't be used in multiple subsystems 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.635 [2024-07-26 20:32:33.135268] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:44.635 [2024-07-26 20:32:33.135291] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:44.635 [2024-07-26 20:32:33.135301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.635 request: 00:11:44.635 { 00:11:44.635 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:44.635 "namespace": { 00:11:44.635 "bdev_name": "Malloc0", 00:11:44.635 "no_auto_visible": false 00:11:44.635 }, 00:11:44.635 "method": "nvmf_subsystem_add_ns", 00:11:44.635 "req_id": 1 00:11:44.635 } 00:11:44.635 Got JSON-RPC error response 00:11:44.635 response: 00:11:44.635 { 00:11:44.635 "code": -32602, 00:11:44.635 "message": "Invalid parameters" 00:11:44.635 } 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:44.635 Adding namespace failed - expected result. 
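
Test case1 deliberately adds the already-claimed Malloc0 to a second subsystem and treats the RPC failure as success. The nmic_status bookkeeping above is the usual way to record an exit code under errexit; condensed into a sketch (rpc_cmd is the helper seen in the trace):

    # Sketch of the expected-failure check at nmic.sh@28-36.
    nmic_status=0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=$?
    if [ "$nmic_status" -eq 0 ]; then
        echo "Adding namespace passed - failure expected." >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'
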
00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:44.635 test case2: host connect to nvmf target in multiple paths 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:44.635 [2024-07-26 20:32:33.151336] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.635 20:32:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:46.023 20:32:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:11:46.591 20:32:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.591 20:32:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:46.591 20:32:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.591 20:32:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:46.591 20:32:35 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:49.126 20:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:49.126 20:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:49.126 20:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.126 20:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:49.126 20:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.126 20:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:49.126 20:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:49.126 [global] 00:11:49.126 thread=1 00:11:49.126 invalidate=1 00:11:49.126 rw=write 00:11:49.126 time_based=1 00:11:49.126 runtime=1 00:11:49.126 ioengine=libaio 00:11:49.126 direct=1 00:11:49.126 bs=4096 00:11:49.126 iodepth=1 00:11:49.126 norandommap=0 00:11:49.126 numjobs=1 00:11:49.126 00:11:49.126 verify_dump=1 00:11:49.126 verify_backlog=512 00:11:49.126 verify_state_save=0 00:11:49.126 do_verify=1 00:11:49.126 verify=crc32c-intel 00:11:49.126 [job0] 00:11:49.126 filename=/dev/nvme0n1 00:11:49.126 Could not set queue depth (nvme0n1) 00:11:49.126 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:49.126 fio-3.35 00:11:49.126 Starting 1 thread 00:11:50.505 00:11:50.505 job0: (groupid=0, jobs=1): err= 0: pid=1007306: Fri Jul 26 20:32:38 2024 00:11:50.505 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:11:50.505 slat (nsec): min=8303, max=28778, avg=8965.97, stdev=849.26 00:11:50.505 clat (nsec): min=40551, max=82425, avg=58448.69, stdev=3317.22 00:11:50.505 lat (nsec): min=59024, max=91431, avg=67414.66, stdev=3379.57 00:11:50.505 clat percentiles (nsec): 00:11:50.505 | 1.00th=[51968], 5.00th=[53504], 10.00th=[54528], 20.00th=[55552], 00:11:50.505 | 30.00th=[56576], 40.00th=[57600], 50.00th=[58112], 60.00th=[59136], 00:11:50.505 | 70.00th=[60160], 80.00th=[61184], 90.00th=[62720], 95.00th=[64256], 00:11:50.505 | 99.00th=[67072], 99.50th=[68096], 99.90th=[72192], 99.95th=[75264], 00:11:50.505 | 99.99th=[82432] 00:11:50.505 write: IOPS=7274, BW=28.4MiB/s (29.8MB/s)(28.4MiB/1001msec); 0 zone resets 00:11:50.505 slat (nsec): min=8010, max=36405, avg=10669.18, stdev=1048.89 00:11:50.505 clat (nsec): min=39183, max=90266, avg=56530.95, stdev=3355.03 00:11:50.505 lat (usec): min=58, max=125, avg=67.20, stdev= 3.46 00:11:50.505 clat percentiles (nsec): 00:11:50.505 | 1.00th=[49920], 5.00th=[51456], 10.00th=[52480], 20.00th=[53504], 00:11:50.505 | 30.00th=[54528], 40.00th=[55552], 50.00th=[56576], 60.00th=[57088], 00:11:50.505 | 70.00th=[58112], 80.00th=[59136], 90.00th=[60672], 95.00th=[62208], 00:11:50.505 | 99.00th=[64768], 99.50th=[66048], 99.90th=[69120], 99.95th=[71168], 00:11:50.505 | 99.99th=[90624] 00:11:50.505 bw ( KiB/s): min=29120, max=29120, per=100.00%, avg=29120.00, stdev= 0.00, samples=1 00:11:50.505 iops : min= 7280, max= 7280, avg=7280.00, stdev= 0.00, samples=1 00:11:50.505 lat (usec) : 50=0.64%, 100=99.36% 00:11:50.505 cpu : usr=9.70%, sys=19.20%, ctx=14450, majf=0, minf=2 00:11:50.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.505 issued rwts: total=7168,7282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.505 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.505 00:11:50.505 Run status group 0 (all jobs): 00:11:50.505 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:11:50.505 WRITE: bw=28.4MiB/s (29.8MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=28.4MiB (29.8MB), run=1001-1001msec 00:11:50.505 00:11:50.505 Disk stats (read/write): 00:11:50.505 nvme0n1: ios=6381/6656, merge=0/0, ticks=311/317, in_queue=628, util=90.68% 00:11:50.505 20:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- 
# lsblk -l -o NAME,SERIAL 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:52.411 rmmod nvme_rdma 00:11:52.411 rmmod nvme_fabrics 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1006143 ']' 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1006143 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1006143 ']' 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1006143 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1006143 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1006143' 00:11:52.411 killing process with pid 1006143 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1006143 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1006143 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:52.411 00:11:52.411 real 0m17.377s 00:11:52.411 user 0m45.286s 00:11:52.411 sys 0m7.298s 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.411 20:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.411 ************************************ 
00:11:52.411 END TEST nvmf_nmic 00:11:52.411 ************************************ 00:11:52.670 20:32:40 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:52.670 20:32:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:52.670 20:32:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:52.670 20:32:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:52.670 ************************************ 00:11:52.670 START TEST nvmf_fio_target 00:11:52.670 ************************************ 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:52.670 * Looking for test storage... 00:11:52.670 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
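The trace above shows fio.sh sourcing nvmf/common.sh, which pins the test ports and address range and then lets build_nvmf_app_args assemble the nvmf_tgt command line. A minimal sketch of the effective result, with every value copied from the trace; this is an illustration of what the sourced variables amount to, not the harness code itself:

# Defaults established by nvmf/common.sh, per the trace above
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)   # resolves here to nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
# build_nvmf_app_args: shared-memory id plus the 0xFFFF tracepoint mask
NVMF_APP=(/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)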
00:11:52.670 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:11:52.671 20:32:41 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 
-- # e810=() 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:02.652 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:02.652 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:02.652 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:02.652 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.652 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:02.653 20:32:49 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:02.653 
20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:02.653 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:02.653 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:02.653 altname enp217s0f0np0 00:12:02.653 altname ens818f0np0 00:12:02.653 inet 192.168.100.8/24 scope global mlx_0_0 00:12:02.653 valid_lft forever preferred_lft forever 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:02.653 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:02.653 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:02.653 altname enp217s0f1np1 00:12:02.653 altname ens818f1np1 00:12:02.653 inet 192.168.100.9/24 scope global mlx_0_1 00:12:02.653 valid_lft forever preferred_lft forever 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ 
rdma == \r\d\m\a ]] 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:02.653 192.168.100.9' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:02.653 192.168.100.9' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:02.653 192.168.100.9' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:02.653 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:02.654 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:02.654 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:02.654 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:02.654 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.654 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1011858 00:12:02.654 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.654 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1011858 00:12:02.654 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1011858 ']' 00:12:02.654 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.654 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:02.654 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
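At this point nvmfappstart has launched the target with -i 0 -e 0xFFFF -m 0xF (four cores) and waitforlisten is blocking until the RPC socket answers. Roughly equivalent shell, as a simplified sketch under the assumption that polling rpc_get_methods is how readiness is detected; the real helper also retries with a bounded count:

# Launch the target and wait for /var/tmp/spdk.sock to answer RPCs
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
while ! "$rpc_py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
    sleep 0.5
done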
00:12:02.654 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:02.654 20:32:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.654 [2024-07-26 20:32:49.670334] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:12:02.654 [2024-07-26 20:32:49.670388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.654 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.654 [2024-07-26 20:32:49.758078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.654 [2024-07-26 20:32:49.796701] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.654 [2024-07-26 20:32:49.796749] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.654 [2024-07-26 20:32:49.796758] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.654 [2024-07-26 20:32:49.796766] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.654 [2024-07-26 20:32:49.796789] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.654 [2024-07-26 20:32:49.796846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.654 [2024-07-26 20:32:49.796938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.654 [2024-07-26 20:32:49.797027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.654 [2024-07-26 20:32:49.797029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.654 20:32:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:02.654 20:32:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:12:02.654 20:32:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:02.654 20:32:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:02.654 20:32:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.654 20:32:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.654 20:32:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:02.654 [2024-07-26 20:32:50.696887] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1817ea0/0x181c390) succeed. 00:12:02.654 [2024-07-26 20:32:50.706036] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18194e0/0x185da20) succeed. 
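With the RDMA transport created (-t rdma --num-shared-buffers 1024 -u 8192) and both mlx5 IB devices up, fio.sh builds the bdevs and subsystem that the fio jobs will exercise. The rpc.py and nvme calls that follow in the trace, condensed into one sequence (order regrouped slightly for readability; all arguments as logged):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
for i in $(seq 0 6); do $rpc_py bdev_malloc_create 64 512; done   # creates Malloc0..Malloc6
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
nvme connect -i 15 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid=8013ee90-59d8-e711-906e-00163566263e \
    -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

The connect surfaces the four namespaces as /dev/nvme0n1 through /dev/nvme0n4, which is why waitforserial expects 4 devices and the fio job files below target exactly those block devices.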
00:12:02.654 20:32:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:02.654 20:32:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:02.654 20:32:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:02.913 20:32:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:02.913 20:32:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:02.913 20:32:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:02.913 20:32:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.172 20:32:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:03.172 20:32:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:03.431 20:32:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.690 20:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:03.690 20:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.690 20:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:03.690 20:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.950 20:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:03.950 20:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:04.208 20:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:04.467 20:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:04.467 20:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:04.467 20:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:04.467 20:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.726 20:32:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:04.985 [2024-07-26 20:32:53.315234] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:04.985 20:32:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:04.985 20:32:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:05.244 20:32:53 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:06.182 20:32:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:06.182 20:32:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:06.182 20:32:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.182 20:32:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:06.182 20:32:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:06.182 20:32:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:08.716 20:32:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:08.716 20:32:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:08.716 20:32:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.716 20:32:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:08.716 20:32:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.716 20:32:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:08.716 20:32:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:08.716 [global] 00:12:08.716 thread=1 00:12:08.716 invalidate=1 00:12:08.716 rw=write 00:12:08.716 time_based=1 00:12:08.716 runtime=1 00:12:08.716 ioengine=libaio 00:12:08.716 direct=1 00:12:08.716 bs=4096 00:12:08.716 iodepth=1 00:12:08.716 norandommap=0 00:12:08.716 numjobs=1 00:12:08.716 00:12:08.716 verify_dump=1 00:12:08.716 verify_backlog=512 00:12:08.716 verify_state_save=0 00:12:08.716 do_verify=1 00:12:08.716 verify=crc32c-intel 00:12:08.716 [job0] 00:12:08.716 filename=/dev/nvme0n1 00:12:08.716 [job1] 00:12:08.716 filename=/dev/nvme0n2 00:12:08.716 [job2] 00:12:08.716 filename=/dev/nvme0n3 00:12:08.716 [job3] 00:12:08.716 filename=/dev/nvme0n4 00:12:08.716 Could not set queue depth (nvme0n1) 00:12:08.716 Could not set queue depth (nvme0n2) 00:12:08.716 Could not set queue depth (nvme0n3) 00:12:08.716 Could not set queue depth (nvme0n4) 00:12:08.716 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.716 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.716 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.716 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:08.716 fio-3.35 00:12:08.716 Starting 4 threads 00:12:10.166 00:12:10.166 job0: (groupid=0, jobs=1): err= 0: pid=1013385: Fri Jul 26 20:32:58 2024 00:12:10.166 read: IOPS=3380, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:12:10.166 slat (nsec): min=8057, max=30251, avg=8983.95, stdev=914.13 00:12:10.166 clat (usec): min=70, max=196, avg=136.60, stdev=16.08 00:12:10.166 lat (usec): min=78, max=205, avg=145.58, stdev=16.06 00:12:10.166 clat percentiles (usec): 00:12:10.166 | 1.00th=[ 79], 5.00th=[ 114], 10.00th=[ 120], 20.00th=[ 126], 00:12:10.166 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:12:10.166 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 165], 00:12:10.166 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 194], 99.95th=[ 198], 00:12:10.166 | 99.99th=[ 198] 00:12:10.166 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:12:10.166 slat (nsec): min=9961, max=36569, avg=11081.30, stdev=1148.54 00:12:10.166 clat (usec): min=63, max=186, avg=125.90, stdev=22.39 00:12:10.166 lat (usec): min=73, max=196, avg=136.99, stdev=22.48 00:12:10.166 clat percentiles (usec): 00:12:10.166 | 1.00th=[ 70], 5.00th=[ 77], 10.00th=[ 85], 20.00th=[ 113], 00:12:10.166 | 30.00th=[ 121], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 135], 00:12:10.166 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 151], 95.00th=[ 161], 00:12:10.166 | 99.00th=[ 169], 99.50th=[ 172], 99.90th=[ 178], 99.95th=[ 182], 00:12:10.166 | 99.99th=[ 188] 00:12:10.167 bw ( KiB/s): min=16384, max=16384, per=28.60%, avg=16384.00, stdev= 0.00, samples=1 00:12:10.167 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:10.167 lat (usec) : 100=7.51%, 250=92.49% 00:12:10.167 cpu : usr=4.10%, sys=10.70%, ctx=6968, majf=0, minf=2 00:12:10.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.167 issued rwts: total=3384,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.167 job1: (groupid=0, jobs=1): err= 0: pid=1013386: Fri Jul 26 20:32:58 2024 00:12:10.167 read: IOPS=3363, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:12:10.167 slat (nsec): min=8075, max=32475, avg=9683.20, stdev=1850.17 00:12:10.167 clat (usec): min=68, max=193, avg=131.51, stdev=20.24 00:12:10.167 lat (usec): min=77, max=202, avg=141.20, stdev=20.50 00:12:10.167 clat percentiles (usec): 00:12:10.167 | 1.00th=[ 74], 5.00th=[ 80], 10.00th=[ 112], 20.00th=[ 122], 00:12:10.167 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:12:10.167 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 163], 00:12:10.167 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 184], 99.95th=[ 192], 00:12:10.167 | 99.99th=[ 194] 00:12:10.167 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:12:10.167 slat (nsec): min=9179, max=76796, avg=11904.06, stdev=2288.33 00:12:10.167 clat (usec): min=63, 
max=187, avg=129.63, stdev=17.37 00:12:10.167 lat (usec): min=78, max=199, avg=141.54, stdev=17.72 00:12:10.167 clat percentiles (usec): 00:12:10.167 | 1.00th=[ 76], 5.00th=[ 105], 10.00th=[ 111], 20.00th=[ 117], 00:12:10.167 | 30.00th=[ 123], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 133], 00:12:10.167 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 153], 95.00th=[ 163], 00:12:10.167 | 99.00th=[ 169], 99.50th=[ 172], 99.90th=[ 184], 99.95th=[ 184], 00:12:10.167 | 99.99th=[ 188] 00:12:10.167 bw ( KiB/s): min=16384, max=16384, per=28.60%, avg=16384.00, stdev= 0.00, samples=1 00:12:10.167 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:10.167 lat (usec) : 100=5.52%, 250=94.48% 00:12:10.167 cpu : usr=5.50%, sys=10.10%, ctx=6952, majf=0, minf=1 00:12:10.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.167 issued rwts: total=3367,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.167 job2: (groupid=0, jobs=1): err= 0: pid=1013387: Fri Jul 26 20:32:58 2024 00:12:10.167 read: IOPS=3425, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1001msec) 00:12:10.167 slat (nsec): min=3276, max=33242, avg=8786.61, stdev=3489.25 00:12:10.167 clat (usec): min=75, max=193, avg=133.67, stdev=16.12 00:12:10.167 lat (usec): min=84, max=205, avg=142.46, stdev=16.66 00:12:10.167 clat percentiles (usec): 00:12:10.167 | 1.00th=[ 86], 5.00th=[ 102], 10.00th=[ 117], 20.00th=[ 124], 00:12:10.167 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:12:10.167 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:12:10.167 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 190], 99.95th=[ 194], 00:12:10.167 | 99.99th=[ 194] 00:12:10.167 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:12:10.167 slat (nsec): min=4004, max=69785, avg=11627.54, stdev=3927.18 00:12:10.167 clat (usec): min=70, max=185, avg=126.58, stdev=16.19 00:12:10.167 lat (usec): min=82, max=216, avg=138.20, stdev=17.36 00:12:10.167 clat percentiles (usec): 00:12:10.167 | 1.00th=[ 83], 5.00th=[ 98], 10.00th=[ 109], 20.00th=[ 115], 00:12:10.167 | 30.00th=[ 120], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 131], 00:12:10.167 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 153], 00:12:10.167 | 99.00th=[ 167], 99.50th=[ 172], 99.90th=[ 180], 99.95th=[ 184], 00:12:10.167 | 99.99th=[ 186] 00:12:10.167 bw ( KiB/s): min=16120, max=16120, per=28.14%, avg=16120.00, stdev= 0.00, samples=1 00:12:10.167 iops : min= 4030, max= 4030, avg=4030.00, stdev= 0.00, samples=1 00:12:10.167 lat (usec) : 100=5.13%, 250=94.87% 00:12:10.167 cpu : usr=4.90%, sys=8.70%, ctx=7014, majf=0, minf=1 00:12:10.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.167 issued rwts: total=3429,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.167 job3: (groupid=0, jobs=1): err= 0: pid=1013388: Fri Jul 26 20:32:58 2024 00:12:10.167 read: IOPS=3403, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1001msec) 00:12:10.167 slat (nsec): min=3320, max=62448, avg=10363.94, stdev=2934.79 00:12:10.167 clat (usec): 
min=73, max=239, avg=135.05, stdev=16.32 00:12:10.167 lat (usec): min=82, max=244, avg=145.41, stdev=16.10 00:12:10.167 clat percentiles (usec): 00:12:10.167 | 1.00th=[ 83], 5.00th=[ 113], 10.00th=[ 118], 20.00th=[ 124], 00:12:10.167 | 30.00th=[ 130], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:12:10.167 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 155], 95.00th=[ 165], 00:12:10.167 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 206], 99.95th=[ 210], 00:12:10.167 | 99.99th=[ 241] 00:12:10.167 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:12:10.167 slat (nsec): min=4009, max=48610, avg=11418.39, stdev=3163.39 00:12:10.167 clat (usec): min=67, max=185, avg=123.86, stdev=19.67 00:12:10.167 lat (usec): min=78, max=197, avg=135.28, stdev=20.23 00:12:10.167 clat percentiles (usec): 00:12:10.167 | 1.00th=[ 77], 5.00th=[ 84], 10.00th=[ 90], 20.00th=[ 111], 00:12:10.167 | 30.00th=[ 118], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 131], 00:12:10.167 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 153], 00:12:10.167 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 182], 00:12:10.167 | 99.99th=[ 186] 00:12:10.167 bw ( KiB/s): min=16384, max=16384, per=28.60%, avg=16384.00, stdev= 0.00, samples=1 00:12:10.167 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:10.167 lat (usec) : 100=7.85%, 250=92.15% 00:12:10.167 cpu : usr=5.00%, sys=10.80%, ctx=6991, majf=0, minf=1 00:12:10.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.167 issued rwts: total=3407,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.167 00:12:10.167 Run status group 0 (all jobs): 00:12:10.167 READ: bw=53.0MiB/s (55.6MB/s), 13.1MiB/s-13.4MiB/s (13.8MB/s-14.0MB/s), io=53.1MiB (55.7MB), run=1001-1001msec 00:12:10.167 WRITE: bw=55.9MiB/s (58.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=56.0MiB (58.7MB), run=1001-1001msec 00:12:10.167 00:12:10.167 Disk stats (read/write): 00:12:10.167 nvme0n1: ios=2847/3072, merge=0/0, ticks=369/356, in_queue=725, util=84.15% 00:12:10.167 nvme0n2: ios=2790/3072, merge=0/0, ticks=343/369, in_queue=712, util=85.35% 00:12:10.167 nvme0n3: ios=2681/3072, merge=0/0, ticks=331/370, in_queue=701, util=88.42% 00:12:10.167 nvme0n4: ios=2815/3072, merge=0/0, ticks=362/347, in_queue=709, util=89.47% 00:12:10.167 20:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:10.167 [global] 00:12:10.167 thread=1 00:12:10.167 invalidate=1 00:12:10.167 rw=randwrite 00:12:10.167 time_based=1 00:12:10.167 runtime=1 00:12:10.167 ioengine=libaio 00:12:10.167 direct=1 00:12:10.167 bs=4096 00:12:10.167 iodepth=1 00:12:10.167 norandommap=0 00:12:10.167 numjobs=1 00:12:10.167 00:12:10.167 verify_dump=1 00:12:10.167 verify_backlog=512 00:12:10.167 verify_state_save=0 00:12:10.167 do_verify=1 00:12:10.167 verify=crc32c-intel 00:12:10.167 [job0] 00:12:10.167 filename=/dev/nvme0n1 00:12:10.167 [job1] 00:12:10.167 filename=/dev/nvme0n2 00:12:10.167 [job2] 00:12:10.167 filename=/dev/nvme0n3 00:12:10.167 [job3] 00:12:10.167 filename=/dev/nvme0n4 00:12:10.167 Could not set queue depth (nvme0n1) 00:12:10.167 Could not set queue depth (nvme0n2) 00:12:10.167 
Could not set queue depth (nvme0n3) 00:12:10.167 Could not set queue depth (nvme0n4) 00:12:10.426 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.426 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.426 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.426 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.426 fio-3.35 00:12:10.426 Starting 4 threads 00:12:11.805 00:12:11.805 job0: (groupid=0, jobs=1): err= 0: pid=1013819: Fri Jul 26 20:32:59 2024 00:12:11.805 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:12:11.805 slat (nsec): min=8049, max=20751, avg=8855.64, stdev=807.83 00:12:11.805 clat (usec): min=71, max=218, avg=150.88, stdev=16.88 00:12:11.805 lat (usec): min=80, max=228, avg=159.73, stdev=16.86 00:12:11.805 clat percentiles (usec): 00:12:11.805 | 1.00th=[ 84], 5.00th=[ 127], 10.00th=[ 139], 20.00th=[ 145], 00:12:11.805 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:12:11.805 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 172], 00:12:11.805 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 215], 99.95th=[ 217], 00:12:11.805 | 99.99th=[ 219] 00:12:11.805 write: IOPS=3308, BW=12.9MiB/s (13.6MB/s)(12.9MiB/1001msec); 0 zone resets 00:12:11.805 slat (nsec): min=7834, max=49373, avg=10845.81, stdev=1268.44 00:12:11.805 clat (usec): min=66, max=270, avg=138.36, stdev=20.70 00:12:11.805 lat (usec): min=76, max=282, avg=149.20, stdev=20.81 00:12:11.805 clat percentiles (usec): 00:12:11.805 | 1.00th=[ 79], 5.00th=[ 100], 10.00th=[ 112], 20.00th=[ 125], 00:12:11.805 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 145], 00:12:11.805 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 169], 00:12:11.805 | 99.00th=[ 190], 99.50th=[ 198], 99.90th=[ 204], 99.95th=[ 215], 00:12:11.805 | 99.99th=[ 269] 00:12:11.805 bw ( KiB/s): min=13440, max=13440, per=21.82%, avg=13440.00, stdev= 0.00, samples=1 00:12:11.805 iops : min= 3360, max= 3360, avg=3360.00, stdev= 0.00, samples=1 00:12:11.805 lat (usec) : 100=4.10%, 250=95.88%, 500=0.02% 00:12:11.805 cpu : usr=5.10%, sys=8.40%, ctx=6385, majf=0, minf=1 00:12:11.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:11.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.805 issued rwts: total=3072,3312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:11.805 job1: (groupid=0, jobs=1): err= 0: pid=1013820: Fri Jul 26 20:32:59 2024 00:12:11.805 read: IOPS=5571, BW=21.8MiB/s (22.8MB/s)(21.8MiB/1001msec) 00:12:11.805 slat (nsec): min=7952, max=30054, avg=8803.70, stdev=890.85 00:12:11.805 clat (usec): min=64, max=158, avg=78.12, stdev= 6.77 00:12:11.805 lat (usec): min=72, max=167, avg=86.92, stdev= 6.87 00:12:11.805 clat percentiles (usec): 00:12:11.805 | 1.00th=[ 69], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 74], 00:12:11.805 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 79], 00:12:11.805 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 87], 00:12:11.805 | 99.00th=[ 115], 99.50th=[ 124], 99.90th=[ 131], 99.95th=[ 139], 00:12:11.805 | 99.99th=[ 159] 00:12:11.805 write: IOPS=5626, BW=22.0MiB/s 
(23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:12:11.805 slat (nsec): min=9784, max=37504, avg=10393.34, stdev=961.85 00:12:11.805 clat (usec): min=60, max=143, avg=77.29, stdev=11.72 00:12:11.805 lat (usec): min=71, max=154, avg=87.69, stdev=11.86 00:12:11.805 clat percentiles (usec): 00:12:11.805 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 72], 00:12:11.805 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 76], 00:12:11.805 | 70.00th=[ 78], 80.00th=[ 80], 90.00th=[ 84], 95.00th=[ 113], 00:12:11.805 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 131], 99.95th=[ 135], 00:12:11.805 | 99.99th=[ 145] 00:12:11.805 bw ( KiB/s): min=23536, max=23536, per=38.21%, avg=23536.00, stdev= 0.00, samples=1 00:12:11.805 iops : min= 5884, max= 5884, avg=5884.00, stdev= 0.00, samples=1 00:12:11.805 lat (usec) : 100=95.80%, 250=4.20% 00:12:11.805 cpu : usr=7.50%, sys=14.80%, ctx=11209, majf=0, minf=1 00:12:11.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:11.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.805 issued rwts: total=5577,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:11.805 job2: (groupid=0, jobs=1): err= 0: pid=1013821: Fri Jul 26 20:32:59 2024 00:12:11.805 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:12:11.805 slat (nsec): min=8356, max=31521, avg=10221.08, stdev=2661.53 00:12:11.805 clat (usec): min=78, max=220, avg=152.05, stdev=15.85 00:12:11.805 lat (usec): min=87, max=234, avg=162.27, stdev=16.40 00:12:11.805 clat percentiles (usec): 00:12:11.805 | 1.00th=[ 94], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 143], 00:12:11.805 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 00:12:11.805 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 182], 00:12:11.805 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 217], 99.95th=[ 217], 00:12:11.805 | 99.99th=[ 221] 00:12:11.805 write: IOPS=3155, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec); 0 zone resets 00:12:11.805 slat (nsec): min=10096, max=36765, avg=12752.02, stdev=3349.26 00:12:11.805 clat (usec): min=70, max=217, avg=141.13, stdev=20.59 00:12:11.805 lat (usec): min=81, max=239, avg=153.88, stdev=21.57 00:12:11.805 clat percentiles (usec): 00:12:11.805 | 1.00th=[ 83], 5.00th=[ 109], 10.00th=[ 115], 20.00th=[ 128], 00:12:11.805 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 145], 00:12:11.805 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 165], 95.00th=[ 180], 00:12:11.805 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 208], 99.95th=[ 217], 00:12:11.805 | 99.99th=[ 219] 00:12:11.805 bw ( KiB/s): min=13192, max=13192, per=21.42%, avg=13192.00, stdev= 0.00, samples=1 00:12:11.805 iops : min= 3298, max= 3298, avg=3298.00, stdev= 0.00, samples=1 00:12:11.805 lat (usec) : 100=2.20%, 250=97.80% 00:12:11.805 cpu : usr=4.40%, sys=8.70%, ctx=6231, majf=0, minf=2 00:12:11.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:11.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.805 issued rwts: total=3072,3159,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:11.805 job3: (groupid=0, jobs=1): err= 0: pid=1013822: Fri Jul 26 20:32:59 2024 00:12:11.805 read: IOPS=3065, 
BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:12:11.805 slat (nsec): min=8481, max=37420, avg=9244.56, stdev=1085.34 00:12:11.805 clat (usec): min=75, max=219, avg=150.29, stdev=15.68 00:12:11.805 lat (usec): min=84, max=228, avg=159.54, stdev=15.69 00:12:11.805 clat percentiles (usec): 00:12:11.805 | 1.00th=[ 90], 5.00th=[ 128], 10.00th=[ 139], 20.00th=[ 145], 00:12:11.805 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 00:12:11.805 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 169], 00:12:11.805 | 99.00th=[ 188], 99.50th=[ 200], 99.90th=[ 215], 99.95th=[ 217], 00:12:11.805 | 99.99th=[ 221] 00:12:11.805 write: IOPS=3320, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1002msec); 0 zone resets 00:12:11.805 slat (nsec): min=7981, max=46578, avg=11031.78, stdev=1208.65 00:12:11.805 clat (usec): min=70, max=334, avg=137.58, stdev=19.75 00:12:11.805 lat (usec): min=80, max=345, avg=148.62, stdev=19.79 00:12:11.805 clat percentiles (usec): 00:12:11.805 | 1.00th=[ 82], 5.00th=[ 97], 10.00th=[ 111], 20.00th=[ 125], 00:12:11.805 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:12:11.805 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:12:11.805 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 202], 99.95th=[ 208], 00:12:11.805 | 99.99th=[ 334] 00:12:11.805 bw ( KiB/s): min=13088, max=13528, per=21.61%, avg=13308.00, stdev=311.13, samples=2 00:12:11.805 iops : min= 3272, max= 3382, avg=3327.00, stdev=77.78, samples=2 00:12:11.805 lat (usec) : 100=4.27%, 250=95.72%, 500=0.02% 00:12:11.805 cpu : usr=4.20%, sys=9.49%, ctx=6400, majf=0, minf=1 00:12:11.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:11.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.805 issued rwts: total=3072,3327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:11.805 00:12:11.805 Run status group 0 (all jobs): 00:12:11.805 READ: bw=57.7MiB/s (60.5MB/s), 12.0MiB/s-21.8MiB/s (12.6MB/s-22.8MB/s), io=57.8MiB (60.6MB), run=1001-1002msec 00:12:11.805 WRITE: bw=60.2MiB/s (63.1MB/s), 12.3MiB/s-22.0MiB/s (12.9MB/s-23.0MB/s), io=60.3MiB (63.2MB), run=1001-1002msec 00:12:11.805 00:12:11.805 Disk stats (read/write): 00:12:11.805 nvme0n1: ios=2609/2771, merge=0/0, ticks=358/350, in_queue=708, util=84.07% 00:12:11.805 nvme0n2: ios=4608/4702, merge=0/0, ticks=322/335, in_queue=657, util=85.10% 00:12:11.805 nvme0n3: ios=2560/2640, merge=0/0, ticks=364/343, in_queue=707, util=88.25% 00:12:11.805 nvme0n4: ios=2560/2797, merge=0/0, ticks=334/360, in_queue=694, util=89.39% 00:12:11.805 20:32:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:11.805 [global] 00:12:11.805 thread=1 00:12:11.805 invalidate=1 00:12:11.805 rw=write 00:12:11.805 time_based=1 00:12:11.805 runtime=1 00:12:11.805 ioengine=libaio 00:12:11.805 direct=1 00:12:11.805 bs=4096 00:12:11.805 iodepth=128 00:12:11.805 norandommap=0 00:12:11.805 numjobs=1 00:12:11.805 00:12:11.805 verify_dump=1 00:12:11.805 verify_backlog=512 00:12:11.805 verify_state_save=0 00:12:11.805 do_verify=1 00:12:11.805 verify=crc32c-intel 00:12:11.805 [job0] 00:12:11.805 filename=/dev/nvme0n1 00:12:11.805 [job1] 00:12:11.806 filename=/dev/nvme0n2 00:12:11.806 [job2] 00:12:11.806 filename=/dev/nvme0n3 00:12:11.806 
[job3] 00:12:11.806 filename=/dev/nvme0n4 00:12:11.806 Could not set queue depth (nvme0n1) 00:12:11.806 Could not set queue depth (nvme0n2) 00:12:11.806 Could not set queue depth (nvme0n3) 00:12:11.806 Could not set queue depth (nvme0n4) 00:12:11.806 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:11.806 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:11.806 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:11.806 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:11.806 fio-3.35 00:12:11.806 Starting 4 threads 00:12:13.183 00:12:13.183 job0: (groupid=0, jobs=1): err= 0: pid=1014263: Fri Jul 26 20:33:01 2024 00:12:13.183 read: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec) 00:12:13.183 slat (usec): min=2, max=2659, avg=55.09, stdev=203.31 00:12:13.183 clat (usec): min=5862, max=14889, avg=7245.41, stdev=1044.24 00:12:13.183 lat (usec): min=5884, max=14897, avg=7300.50, stdev=1044.40 00:12:13.183 clat percentiles (usec): 00:12:13.183 | 1.00th=[ 6063], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 6849], 00:12:13.183 | 30.00th=[ 6915], 40.00th=[ 6980], 50.00th=[ 6980], 60.00th=[ 7046], 00:12:13.183 | 70.00th=[ 7111], 80.00th=[ 7373], 90.00th=[ 8029], 95.00th=[ 8160], 00:12:13.183 | 99.00th=[14222], 99.50th=[14746], 99.90th=[14877], 99.95th=[14877], 00:12:13.183 | 99.99th=[14877] 00:12:13.183 write: IOPS=9123, BW=35.6MiB/s (37.4MB/s)(35.7MiB/1003msec); 0 zone resets 00:12:13.183 slat (usec): min=2, max=1566, avg=54.14, stdev=197.50 00:12:13.183 clat (usec): min=2163, max=16805, avg=7002.90, stdev=1553.41 00:12:13.183 lat (usec): min=3165, max=16808, avg=7057.04, stdev=1559.79 00:12:13.183 clat percentiles (usec): 00:12:13.183 | 1.00th=[ 5669], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 6521], 00:12:13.183 | 30.00th=[ 6587], 40.00th=[ 6587], 50.00th=[ 6652], 60.00th=[ 6718], 00:12:13.183 | 70.00th=[ 6783], 80.00th=[ 6849], 90.00th=[ 7570], 95.00th=[ 7832], 00:12:13.183 | 99.00th=[14615], 99.50th=[14746], 99.90th=[16712], 99.95th=[16909], 00:12:13.183 | 99.99th=[16909] 00:12:13.183 bw ( KiB/s): min=34608, max=37584, per=32.16%, avg=36096.00, stdev=2104.35, samples=2 00:12:13.183 iops : min= 8652, max= 9396, avg=9024.00, stdev=526.09, samples=2 00:12:13.183 lat (msec) : 4=0.04%, 10=96.87%, 20=3.09% 00:12:13.183 cpu : usr=4.09%, sys=4.99%, ctx=1184, majf=0, minf=1 00:12:13.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:13.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:13.183 issued rwts: total=8704,9151,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:13.183 job1: (groupid=0, jobs=1): err= 0: pid=1014264: Fri Jul 26 20:33:01 2024 00:12:13.183 read: IOPS=9161, BW=35.8MiB/s (37.5MB/s)(36.0MiB/1006msec) 00:12:13.183 slat (usec): min=2, max=1160, avg=53.06, stdev=194.49 00:12:13.183 clat (usec): min=5710, max=9186, avg=6996.49, stdev=480.65 00:12:13.183 lat (usec): min=5776, max=9191, avg=7049.55, stdev=480.18 00:12:13.183 clat percentiles (usec): 00:12:13.183 | 1.00th=[ 5997], 5.00th=[ 6194], 10.00th=[ 6587], 20.00th=[ 6718], 00:12:13.183 | 30.00th=[ 6783], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 6980], 00:12:13.183 | 70.00th=[ 
7046], 80.00th=[ 7242], 90.00th=[ 7767], 95.00th=[ 8029], 00:12:13.183 | 99.00th=[ 8225], 99.50th=[ 8291], 99.90th=[ 8848], 99.95th=[ 9110], 00:12:13.183 | 99.99th=[ 9241] 00:12:13.183 write: IOPS=9494, BW=37.1MiB/s (38.9MB/s)(37.3MiB/1006msec); 0 zone resets 00:12:13.183 slat (usec): min=2, max=1675, avg=50.75, stdev=185.47 00:12:13.183 clat (usec): min=1357, max=11791, avg=6606.91, stdev=625.00 00:12:13.183 lat (usec): min=1370, max=11794, avg=6657.66, stdev=626.90 00:12:13.183 clat percentiles (usec): 00:12:13.183 | 1.00th=[ 5080], 5.00th=[ 5800], 10.00th=[ 6128], 20.00th=[ 6390], 00:12:13.183 | 30.00th=[ 6456], 40.00th=[ 6521], 50.00th=[ 6521], 60.00th=[ 6587], 00:12:13.183 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 7373], 95.00th=[ 7570], 00:12:13.183 | 99.00th=[ 8160], 99.50th=[ 9241], 99.90th=[11731], 99.95th=[11731], 00:12:13.183 | 99.99th=[11731] 00:12:13.183 bw ( KiB/s): min=36272, max=39120, per=33.59%, avg=37696.00, stdev=2013.84, samples=2 00:12:13.183 iops : min= 9068, max= 9780, avg=9424.00, stdev=503.46, samples=2 00:12:13.183 lat (msec) : 2=0.03%, 4=0.30%, 10=99.44%, 20=0.23% 00:12:13.183 cpu : usr=2.99%, sys=6.67%, ctx=1214, majf=0, minf=1 00:12:13.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:13.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:13.183 issued rwts: total=9216,9551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:13.183 job2: (groupid=0, jobs=1): err= 0: pid=1014268: Fri Jul 26 20:33:01 2024 00:12:13.183 read: IOPS=2743, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1003msec) 00:12:13.183 slat (usec): min=2, max=8974, avg=174.01, stdev=937.14 00:12:13.183 clat (usec): min=1085, max=40390, avg=22072.54, stdev=10984.91 00:12:13.183 lat (usec): min=3826, max=40934, avg=22246.55, stdev=11098.45 00:12:13.183 clat percentiles (usec): 00:12:13.183 | 1.00th=[ 4555], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9765], 00:12:13.183 | 30.00th=[ 9896], 40.00th=[11338], 50.00th=[30278], 60.00th=[31065], 00:12:13.183 | 70.00th=[31327], 80.00th=[31589], 90.00th=[32113], 95.00th=[32637], 00:12:13.183 | 99.00th=[39584], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:12:13.183 | 99.99th=[40633] 00:12:13.183 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:12:13.183 slat (usec): min=2, max=8961, avg=166.44, stdev=904.49 00:12:13.183 clat (usec): min=4815, max=40467, avg=21453.71, stdev=10219.63 00:12:13.183 lat (usec): min=4825, max=40490, avg=21620.15, stdev=10324.98 00:12:13.183 clat percentiles (usec): 00:12:13.183 | 1.00th=[ 5669], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9503], 00:12:13.183 | 30.00th=[11469], 40.00th=[14484], 50.00th=[28443], 60.00th=[30278], 00:12:13.183 | 70.00th=[30802], 80.00th=[31327], 90.00th=[31589], 95.00th=[31851], 00:12:13.183 | 99.00th=[38011], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:12:13.183 | 99.99th=[40633] 00:12:13.183 bw ( KiB/s): min= 8192, max=16384, per=10.95%, avg=12288.00, stdev=5792.62, samples=2 00:12:13.183 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:12:13.183 lat (msec) : 2=0.02%, 4=0.33%, 10=28.64%, 20=16.55%, 50=54.46% 00:12:13.183 cpu : usr=1.20%, sys=2.89%, ctx=482, majf=0, minf=1 00:12:13.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:13.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:12:13.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:13.183 issued rwts: total=2752,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:13.183 job3: (groupid=0, jobs=1): err= 0: pid=1014269: Fri Jul 26 20:33:01 2024 00:12:13.183 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:12:13.183 slat (usec): min=2, max=4217, avg=79.16, stdev=327.24 00:12:13.183 clat (usec): min=6731, max=14659, avg=10321.27, stdev=958.57 00:12:13.183 lat (usec): min=7196, max=14663, avg=10400.43, stdev=984.94 00:12:13.183 clat percentiles (usec): 00:12:13.183 | 1.00th=[ 7832], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[ 9634], 00:12:13.183 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10683], 00:12:13.183 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11207], 95.00th=[11338], 00:12:13.183 | 99.00th=[12911], 99.50th=[14091], 99.90th=[14615], 99.95th=[14615], 00:12:13.183 | 99.99th=[14615] 00:12:13.183 write: IOPS=6412, BW=25.0MiB/s (26.3MB/s)(25.2MiB/1006msec); 0 zone resets 00:12:13.183 slat (usec): min=2, max=4024, avg=76.34, stdev=320.44 00:12:13.183 clat (usec): min=3202, max=15179, avg=9952.69, stdev=1098.10 00:12:13.183 lat (usec): min=3215, max=15189, avg=10029.03, stdev=1122.42 00:12:13.183 clat percentiles (usec): 00:12:13.183 | 1.00th=[ 6521], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 9372], 00:12:13.183 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:12:13.183 | 70.00th=[10552], 80.00th=[10814], 90.00th=[10945], 95.00th=[11207], 00:12:13.183 | 99.00th=[12780], 99.50th=[13566], 99.90th=[14353], 99.95th=[14353], 00:12:13.183 | 99.99th=[15139] 00:12:13.183 bw ( KiB/s): min=24576, max=26016, per=22.54%, avg=25296.00, stdev=1018.23, samples=2 00:12:13.183 iops : min= 6144, max= 6504, avg=6324.00, stdev=254.56, samples=2 00:12:13.183 lat (msec) : 4=0.11%, 10=39.69%, 20=60.20% 00:12:13.183 cpu : usr=1.99%, sys=5.57%, ctx=877, majf=0, minf=1 00:12:13.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:13.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:13.183 issued rwts: total=6144,6451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:13.183 00:12:13.183 Run status group 0 (all jobs): 00:12:13.183 READ: bw=104MiB/s (109MB/s), 10.7MiB/s-35.8MiB/s (11.2MB/s-37.5MB/s), io=105MiB (110MB), run=1003-1006msec 00:12:13.184 WRITE: bw=110MiB/s (115MB/s), 12.0MiB/s-37.1MiB/s (12.5MB/s-38.9MB/s), io=110MiB (116MB), run=1003-1006msec 00:12:13.184 00:12:13.184 Disk stats (read/write): 00:12:13.184 nvme0n1: ios=7729/7810, merge=0/0, ticks=26541/24994, in_queue=51535, util=84.05% 00:12:13.184 nvme0n2: ios=7680/8035, merge=0/0, ticks=51983/51405, in_queue=103388, util=85.00% 00:12:13.184 nvme0n3: ios=1784/2048, merge=0/0, ticks=16530/17933, in_queue=34463, util=88.34% 00:12:13.184 nvme0n4: ios=4936/5120, merge=0/0, ticks=51485/51515, in_queue=103000, util=89.47% 00:12:13.184 20:33:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:13.184 [global] 00:12:13.184 thread=1 00:12:13.184 invalidate=1 00:12:13.184 rw=randwrite 00:12:13.184 time_based=1 00:12:13.184 runtime=1 00:12:13.184 ioengine=libaio 00:12:13.184 direct=1 
00:12:13.184 bs=4096 00:12:13.184 iodepth=128 00:12:13.184 norandommap=0 00:12:13.184 numjobs=1 00:12:13.184 00:12:13.184 verify_dump=1 00:12:13.184 verify_backlog=512 00:12:13.184 verify_state_save=0 00:12:13.184 do_verify=1 00:12:13.184 verify=crc32c-intel 00:12:13.184 [job0] 00:12:13.184 filename=/dev/nvme0n1 00:12:13.184 [job1] 00:12:13.184 filename=/dev/nvme0n2 00:12:13.184 [job2] 00:12:13.184 filename=/dev/nvme0n3 00:12:13.184 [job3] 00:12:13.184 filename=/dev/nvme0n4 00:12:13.184 Could not set queue depth (nvme0n1) 00:12:13.184 Could not set queue depth (nvme0n2) 00:12:13.184 Could not set queue depth (nvme0n3) 00:12:13.184 Could not set queue depth (nvme0n4) 00:12:13.442 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:13.442 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:13.442 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:13.442 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:13.442 fio-3.35 00:12:13.442 Starting 4 threads 00:12:14.819 00:12:14.819 job0: (groupid=0, jobs=1): err= 0: pid=1014793: Fri Jul 26 20:33:03 2024 00:12:14.819 read: IOPS=9013, BW=35.2MiB/s (36.9MB/s)(35.4MiB/1004msec) 00:12:14.819 slat (usec): min=2, max=3506, avg=54.13, stdev=207.02 00:12:14.819 clat (usec): min=3085, max=20606, avg=7131.45, stdev=1436.10 00:12:14.819 lat (usec): min=3523, max=20611, avg=7185.58, stdev=1438.71 00:12:14.819 clat percentiles (usec): 00:12:14.819 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 6783], 00:12:14.819 | 30.00th=[ 6849], 40.00th=[ 6915], 50.00th=[ 6980], 60.00th=[ 6980], 00:12:14.819 | 70.00th=[ 7046], 80.00th=[ 7111], 90.00th=[ 7242], 95.00th=[ 7373], 00:12:14.819 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19792], 99.95th=[20579], 00:12:14.819 | 99.99th=[20579] 00:12:14.819 write: IOPS=9179, BW=35.9MiB/s (37.6MB/s)(36.0MiB/1004msec); 0 zone resets 00:12:14.819 slat (usec): min=2, max=1519, avg=51.31, stdev=179.37 00:12:14.819 clat (usec): min=5358, max=11292, avg=6781.19, stdev=914.90 00:12:14.819 lat (usec): min=5509, max=11471, avg=6832.50, stdev=915.34 00:12:14.819 clat percentiles (usec): 00:12:14.819 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6325], 20.00th=[ 6456], 00:12:14.819 | 30.00th=[ 6521], 40.00th=[ 6587], 50.00th=[ 6587], 60.00th=[ 6652], 00:12:14.819 | 70.00th=[ 6718], 80.00th=[ 6783], 90.00th=[ 6915], 95.00th=[ 9765], 00:12:14.819 | 99.00th=[10683], 99.50th=[10814], 99.90th=[10814], 99.95th=[10814], 00:12:14.819 | 99.99th=[11338] 00:12:14.819 bw ( KiB/s): min=36864, max=36864, per=35.37%, avg=36864.00, stdev= 0.00, samples=2 00:12:14.819 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:12:14.819 lat (msec) : 4=0.04%, 10=96.38%, 20=3.55%, 50=0.04% 00:12:14.819 cpu : usr=5.78%, sys=6.98%, ctx=1179, majf=0, minf=1 00:12:14.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:14.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:14.819 issued rwts: total=9050,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.819 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:14.819 job1: (groupid=0, jobs=1): err= 0: pid=1014797: Fri Jul 26 20:33:03 2024 00:12:14.819 read: IOPS=4079, BW=15.9MiB/s 
(16.7MB/s)(16.0MiB/1004msec) 00:12:14.819 slat (usec): min=2, max=2134, avg=115.92, stdev=397.55 00:12:14.819 clat (usec): min=12006, max=16789, avg=15106.72, stdev=539.20 00:12:14.819 lat (usec): min=12009, max=16793, avg=15222.64, stdev=375.13 00:12:14.819 clat percentiles (usec): 00:12:14.819 | 1.00th=[13173], 5.00th=[13829], 10.00th=[14615], 20.00th=[14877], 00:12:14.819 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15139], 60.00th=[15270], 00:12:14.819 | 70.00th=[15401], 80.00th=[15533], 90.00th=[15664], 95.00th=[15795], 00:12:14.819 | 99.00th=[16057], 99.50th=[16188], 99.90th=[16188], 99.95th=[16909], 00:12:14.819 | 99.99th=[16909] 00:12:14.819 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(17.9MiB/1004msec); 0 zone resets 00:12:14.819 slat (usec): min=2, max=3181, avg=110.60, stdev=383.76 00:12:14.819 clat (usec): min=1578, max=18840, avg=14185.43, stdev=1753.73 00:12:14.819 lat (usec): min=4400, max=18844, avg=14296.03, stdev=1720.21 00:12:14.819 clat percentiles (usec): 00:12:14.820 | 1.00th=[ 8979], 5.00th=[10421], 10.00th=[10683], 20.00th=[14091], 00:12:14.820 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14746], 60.00th=[14746], 00:12:14.820 | 70.00th=[14877], 80.00th=[15008], 90.00th=[15270], 95.00th=[15401], 00:12:14.820 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18744], 99.95th=[18744], 00:12:14.820 | 99.99th=[18744] 00:12:14.820 bw ( KiB/s): min=17408, max=18288, per=17.12%, avg=17848.00, stdev=622.25, samples=2 00:12:14.820 iops : min= 4352, max= 4572, avg=4462.00, stdev=155.56, samples=2 00:12:14.820 lat (msec) : 2=0.01%, 10=1.19%, 20=98.80% 00:12:14.820 cpu : usr=2.49%, sys=4.59%, ctx=1931, majf=0, minf=1 00:12:14.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:14.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:14.820 issued rwts: total=4096,4590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:14.820 job2: (groupid=0, jobs=1): err= 0: pid=1014805: Fri Jul 26 20:33:03 2024 00:12:14.820 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:12:14.820 slat (usec): min=2, max=2114, avg=116.55, stdev=369.07 00:12:14.820 clat (usec): min=12804, max=16137, avg=15090.25, stdev=491.05 00:12:14.820 lat (usec): min=14447, max=16140, avg=15206.80, stdev=330.71 00:12:14.820 clat percentiles (usec): 00:12:14.820 | 1.00th=[13304], 5.00th=[14091], 10.00th=[14615], 20.00th=[14746], 00:12:14.820 | 30.00th=[15008], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:12:14.820 | 70.00th=[15401], 80.00th=[15401], 90.00th=[15664], 95.00th=[15795], 00:12:14.820 | 99.00th=[16057], 99.50th=[16057], 99.90th=[16057], 99.95th=[16057], 00:12:14.820 | 99.99th=[16188] 00:12:14.820 write: IOPS=4338, BW=16.9MiB/s (17.8MB/s)(17.0MiB/1004msec); 0 zone resets 00:12:14.820 slat (usec): min=2, max=3884, avg=116.15, stdev=375.13 00:12:14.820 clat (usec): min=1567, max=21204, avg=14874.07, stdev=1523.73 00:12:14.820 lat (usec): min=3884, max=21209, avg=14990.23, stdev=1485.80 00:12:14.820 clat percentiles (usec): 00:12:14.820 | 1.00th=[ 8455], 5.00th=[13435], 10.00th=[14091], 20.00th=[14353], 00:12:14.820 | 30.00th=[14484], 40.00th=[14746], 50.00th=[14877], 60.00th=[14877], 00:12:14.820 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15664], 95.00th=[17695], 00:12:14.820 | 99.00th=[19268], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:12:14.820 | 99.99th=[21103] 00:12:14.820 bw ( 
KiB/s): min=16384, max=17440, per=16.22%, avg=16912.00, stdev=746.70, samples=2 00:12:14.820 iops : min= 4096, max= 4360, avg=4228.00, stdev=186.68, samples=2 00:12:14.820 lat (msec) : 2=0.01%, 4=0.15%, 10=0.57%, 20=99.24%, 50=0.02% 00:12:14.820 cpu : usr=2.19%, sys=4.59%, ctx=1887, majf=0, minf=1 00:12:14.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:14.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:14.820 issued rwts: total=4096,4356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:14.820 job3: (groupid=0, jobs=1): err= 0: pid=1014809: Fri Jul 26 20:33:03 2024 00:12:14.820 read: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec) 00:12:14.820 slat (usec): min=2, max=2110, avg=63.39, stdev=232.49 00:12:14.820 clat (usec): min=5894, max=12415, avg=8328.90, stdev=512.06 00:12:14.820 lat (usec): min=6019, max=12425, avg=8392.29, stdev=502.21 00:12:14.820 clat percentiles (usec): 00:12:14.820 | 1.00th=[ 6915], 5.00th=[ 7439], 10.00th=[ 7832], 20.00th=[ 8094], 00:12:14.820 | 30.00th=[ 8225], 40.00th=[ 8291], 50.00th=[ 8356], 60.00th=[ 8455], 00:12:14.820 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8717], 95.00th=[ 8848], 00:12:14.820 | 99.00th=[10290], 99.50th=[10814], 99.90th=[11207], 99.95th=[11600], 00:12:14.820 | 99.99th=[12387] 00:12:14.820 write: IOPS=7977, BW=31.2MiB/s (32.7MB/s)(31.3MiB/1003msec); 0 zone resets 00:12:14.820 slat (usec): min=2, max=1769, avg=60.17, stdev=217.30 00:12:14.820 clat (usec): min=1966, max=11401, avg=7890.88, stdev=723.64 00:12:14.820 lat (usec): min=2665, max=11411, avg=7951.05, stdev=720.41 00:12:14.820 clat percentiles (usec): 00:12:14.820 | 1.00th=[ 5735], 5.00th=[ 6849], 10.00th=[ 7308], 20.00th=[ 7635], 00:12:14.820 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8029], 00:12:14.820 | 70.00th=[ 8094], 80.00th=[ 8160], 90.00th=[ 8356], 95.00th=[ 9110], 00:12:14.820 | 99.00th=[ 9765], 99.50th=[ 9896], 99.90th=[11338], 99.95th=[11338], 00:12:14.820 | 99.99th=[11338] 00:12:14.820 bw ( KiB/s): min=30464, max=32528, per=30.22%, avg=31496.00, stdev=1459.47, samples=2 00:12:14.820 iops : min= 7616, max= 8132, avg=7874.00, stdev=364.87, samples=2 00:12:14.820 lat (msec) : 2=0.01%, 4=0.20%, 10=98.78%, 20=1.01% 00:12:14.820 cpu : usr=3.69%, sys=7.39%, ctx=1010, majf=0, minf=1 00:12:14.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:14.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:14.820 issued rwts: total=7680,8001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:14.820 00:12:14.820 Run status group 0 (all jobs): 00:12:14.820 READ: bw=97.0MiB/s (102MB/s), 15.9MiB/s-35.2MiB/s (16.7MB/s-36.9MB/s), io=97.4MiB (102MB), run=1003-1004msec 00:12:14.820 WRITE: bw=102MiB/s (107MB/s), 16.9MiB/s-35.9MiB/s (17.8MB/s-37.6MB/s), io=102MiB (107MB), run=1003-1004msec 00:12:14.820 00:12:14.820 Disk stats (read/write): 00:12:14.820 nvme0n1: ios=7691/7680, merge=0/0, ticks=25717/24238, in_queue=49955, util=84.25% 00:12:14.820 nvme0n2: ios=3584/3661, merge=0/0, ticks=13384/12795, in_queue=26179, util=85.19% 00:12:14.820 nvme0n3: ios=3427/3584, merge=0/0, ticks=12842/13322, in_queue=26164, util=88.34% 00:12:14.820 nvme0n4: 
ios=6242/6656, merge=0/0, ticks=25530/25872, in_queue=51402, util=89.48% 00:12:14.820 20:33:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:14.820 20:33:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1015062 00:12:14.820 20:33:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:14.820 20:33:03 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:14.820 [global] 00:12:14.820 thread=1 00:12:14.820 invalidate=1 00:12:14.820 rw=read 00:12:14.820 time_based=1 00:12:14.820 runtime=10 00:12:14.820 ioengine=libaio 00:12:14.820 direct=1 00:12:14.820 bs=4096 00:12:14.820 iodepth=1 00:12:14.820 norandommap=1 00:12:14.820 numjobs=1 00:12:14.820 00:12:14.820 [job0] 00:12:14.820 filename=/dev/nvme0n1 00:12:14.820 [job1] 00:12:14.820 filename=/dev/nvme0n2 00:12:14.820 [job2] 00:12:14.820 filename=/dev/nvme0n3 00:12:14.820 [job3] 00:12:14.820 filename=/dev/nvme0n4 00:12:14.820 Could not set queue depth (nvme0n1) 00:12:14.820 Could not set queue depth (nvme0n2) 00:12:14.820 Could not set queue depth (nvme0n3) 00:12:14.820 Could not set queue depth (nvme0n4) 00:12:15.078 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.078 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.078 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.078 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.078 fio-3.35 00:12:15.078 Starting 4 threads 00:12:18.364 20:33:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:18.364 20:33:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:18.364 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=107835392, buflen=4096 00:12:18.364 fio: pid=1015250, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:18.364 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=81829888, buflen=4096 00:12:18.364 fio: pid=1015243, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:18.364 20:33:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:18.364 20:33:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:18.364 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=25612288, buflen=4096 00:12:18.364 fio: pid=1015222, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:18.364 20:33:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:18.364 20:33:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:18.623 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=56139776, buflen=4096 00:12:18.623 fio: 
pid=1015230, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:18.623 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:18.623 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:18.623 00:12:18.623 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1015222: Fri Jul 26 20:33:07 2024 00:12:18.624 read: IOPS=7604, BW=29.7MiB/s (31.1MB/s)(88.4MiB/2977msec) 00:12:18.624 slat (usec): min=4, max=23106, avg=12.26, stdev=211.18 00:12:18.624 clat (usec): min=45, max=3939, avg=117.54, stdev=39.31 00:12:18.624 lat (usec): min=56, max=23216, avg=129.80, stdev=214.73 00:12:18.624 clat percentiles (usec): 00:12:18.624 | 1.00th=[ 56], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 82], 00:12:18.624 | 30.00th=[ 100], 40.00th=[ 118], 50.00th=[ 125], 60.00th=[ 130], 00:12:18.624 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 163], 00:12:18.624 | 99.00th=[ 192], 99.50th=[ 200], 99.90th=[ 208], 99.95th=[ 210], 00:12:18.624 | 99.99th=[ 277] 00:12:18.624 bw ( KiB/s): min=26352, max=36664, per=23.92%, avg=29635.20, stdev=4099.08, samples=5 00:12:18.624 iops : min= 6588, max= 9166, avg=7408.80, stdev=1024.77, samples=5 00:12:18.624 lat (usec) : 50=0.09%, 100=29.98%, 250=69.90%, 500=0.01% 00:12:18.624 lat (msec) : 4=0.01% 00:12:18.624 cpu : usr=3.19%, sys=11.02%, ctx=22642, majf=0, minf=1 00:12:18.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:18.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.624 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.624 issued rwts: total=22638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:18.624 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1015230: Fri Jul 26 20:33:07 2024 00:12:18.624 read: IOPS=9409, BW=36.8MiB/s (38.5MB/s)(118MiB/3198msec) 00:12:18.624 slat (usec): min=3, max=15926, avg=11.43, stdev=159.81 00:12:18.624 clat (usec): min=39, max=20676, avg=92.96, stdev=122.39 00:12:18.624 lat (usec): min=59, max=20685, avg=104.39, stdev=201.38 00:12:18.624 clat percentiles (usec): 00:12:18.624 | 1.00th=[ 56], 5.00th=[ 61], 10.00th=[ 70], 20.00th=[ 76], 00:12:18.624 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 84], 00:12:18.624 | 70.00th=[ 89], 80.00th=[ 116], 90.00th=[ 139], 95.00th=[ 153], 00:12:18.624 | 99.00th=[ 192], 99.50th=[ 200], 99.90th=[ 210], 99.95th=[ 215], 00:12:18.624 | 99.99th=[ 314] 00:12:18.624 bw ( KiB/s): min=33392, max=44160, per=30.21%, avg=37418.50, stdev=3705.61, samples=6 00:12:18.624 iops : min= 8348, max=11040, avg=9354.50, stdev=926.44, samples=6 00:12:18.624 lat (usec) : 50=0.01%, 100=76.25%, 250=23.71%, 500=0.01%, 1000=0.01% 00:12:18.624 lat (msec) : 2=0.01%, 50=0.01% 00:12:18.624 cpu : usr=4.41%, sys=12.54%, ctx=30100, majf=0, minf=1 00:12:18.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:18.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.624 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.624 issued rwts: total=30091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.624 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:12:18.624 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1015243: Fri Jul 26 20:33:07 2024 00:12:18.624 read: IOPS=7115, BW=27.8MiB/s (29.1MB/s)(78.0MiB/2808msec) 00:12:18.624 slat (usec): min=5, max=12703, avg=10.97, stdev=105.76 00:12:18.624 clat (usec): min=39, max=339, avg=127.19, stdev=21.64 00:12:18.624 lat (usec): min=67, max=12804, avg=138.16, stdev=107.71 00:12:18.624 clat percentiles (usec): 00:12:18.624 | 1.00th=[ 80], 5.00th=[ 87], 10.00th=[ 93], 20.00th=[ 113], 00:12:18.624 | 30.00th=[ 122], 40.00th=[ 126], 50.00th=[ 130], 60.00th=[ 133], 00:12:18.624 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 151], 95.00th=[ 161], 00:12:18.624 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 204], 99.95th=[ 206], 00:12:18.624 | 99.99th=[ 326] 00:12:18.624 bw ( KiB/s): min=27064, max=31968, per=23.03%, avg=28523.20, stdev=1987.73, samples=5 00:12:18.624 iops : min= 6766, max= 7992, avg=7130.80, stdev=496.93, samples=5 00:12:18.624 lat (usec) : 50=0.01%, 100=13.56%, 250=86.42%, 500=0.01% 00:12:18.624 cpu : usr=3.03%, sys=10.58%, ctx=19982, majf=0, minf=1 00:12:18.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:18.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.624 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.624 issued rwts: total=19979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:18.624 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1015250: Fri Jul 26 20:33:07 2024 00:12:18.624 read: IOPS=9968, BW=38.9MiB/s (40.8MB/s)(103MiB/2641msec) 00:12:18.624 slat (nsec): min=8139, max=35488, avg=9109.26, stdev=831.06 00:12:18.624 clat (usec): min=66, max=185, avg=89.06, stdev= 7.94 00:12:18.624 lat (usec): min=80, max=194, avg=98.17, stdev= 7.99 00:12:18.624 clat percentiles (usec): 00:12:18.624 | 1.00th=[ 78], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:12:18.624 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 90], 00:12:18.624 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 97], 95.00th=[ 101], 00:12:18.624 | 99.00th=[ 126], 99.50th=[ 133], 99.90th=[ 141], 99.95th=[ 147], 00:12:18.624 | 99.99th=[ 167] 00:12:18.624 bw ( KiB/s): min=38896, max=41024, per=32.60%, avg=40377.60, stdev=855.69, samples=5 00:12:18.624 iops : min= 9724, max=10256, avg=10094.40, stdev=213.92, samples=5 00:12:18.624 lat (usec) : 100=93.76%, 250=6.23% 00:12:18.624 cpu : usr=4.05%, sys=14.43%, ctx=26328, majf=0, minf=2 00:12:18.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:18.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.624 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.624 issued rwts: total=26328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:18.624 00:12:18.624 Run status group 0 (all jobs): 00:12:18.624 READ: bw=121MiB/s (127MB/s), 27.8MiB/s-38.9MiB/s (29.1MB/s-40.8MB/s), io=387MiB (406MB), run=2641-3198msec 00:12:18.624 00:12:18.624 Disk stats (read/write): 00:12:18.624 nvme0n1: ios=21074/0, merge=0/0, ticks=2413/0, in_queue=2413, util=92.79% 00:12:18.624 nvme0n2: ios=28775/0, merge=0/0, ticks=2473/0, in_queue=2473, util=93.71% 00:12:18.624 nvme0n3: ios=18362/0, merge=0/0, ticks=2167/0, in_queue=2167, util=96.02% 00:12:18.624 nvme0n4: ios=26040/0, 
merge=0/0, ticks=2129/0, in_queue=2129, util=96.42% 00:12:18.884 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:18.884 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:18.884 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:18.884 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:19.143 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:19.143 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:19.402 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:19.402 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:19.660 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:19.660 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1015062 00:12:19.660 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:19.660 20:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.597 20:33:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.597 20:33:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:20.597 20:33:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:20.597 20:33:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.597 20:33:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:20.597 20:33:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.597 20:33:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:20.597 20:33:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:20.597 20:33:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:20.597 nvmf hotplug test: fio failed as expected 00:12:20.598 20:33:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.598 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:20.598 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f 
./local-job1-1-verify.state 00:12:20.598 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:20.598 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:20.598 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:20.598 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:20.598 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:12:20.598 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:20.598 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:20.598 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:12:20.598 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:20.598 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:20.598 rmmod nvme_rdma 00:12:20.857 rmmod nvme_fabrics 00:12:20.857 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:20.857 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:12:20.857 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:12:20.857 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1011858 ']' 00:12:20.857 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1011858 00:12:20.857 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1011858 ']' 00:12:20.857 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1011858 00:12:20.858 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:20.858 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:20.858 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1011858 00:12:20.858 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:20.858 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:20.858 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1011858' 00:12:20.858 killing process with pid 1011858 00:12:20.858 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1011858 00:12:20.858 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1011858 00:12:21.117 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.117 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:21.117 00:12:21.117 real 0m28.458s 00:12:21.117 user 2m6.903s 00:12:21.117 sys 0m11.709s 00:12:21.117 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.118 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # 
set +x 00:12:21.118 ************************************ 00:12:21.118 END TEST nvmf_fio_target 00:12:21.118 ************************************ 00:12:21.118 20:33:09 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:21.118 20:33:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:21.118 20:33:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.118 20:33:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:21.118 ************************************ 00:12:21.118 START TEST nvmf_bdevio 00:12:21.118 ************************************ 00:12:21.118 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:21.377 * Looking for test storage... 00:12:21.378 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.378 
20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:12:21.378 20:33:09 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:12:29.502 
20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:12:29.502 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:29.503 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.503 20:33:17 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:29.503 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:29.503 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:29.503 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:12:29.503 20:33:17 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:29.503 20:33:17 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_0 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:29.503 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:29.503 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:29.503 altname enp217s0f0np0 00:12:29.503 altname ens818f0np0 00:12:29.503 inet 192.168.100.8/24 scope global mlx_0_0 00:12:29.503 valid_lft forever preferred_lft forever 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:29.503 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:29.763 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:29.763 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:29.763 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:29.763 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:29.763 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:29.763 altname enp217s0f1np1 00:12:29.763 altname ens818f1np1 00:12:29.763 inet 192.168.100.9/24 scope global mlx_0_1 00:12:29.763 valid_lft forever preferred_lft forever 00:12:29.763 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:12:29.763 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:29.763 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:29.763 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:29.763 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:29.763 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:29.763 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:29.763 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:29.763 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 
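The helper traced above resolves an interface's IPv4 address with a three-stage pipeline: ip -o -4 prints one record per line, awk picks the ADDR/PREFIX field, and cut drops the prefix length. A minimal standalone sketch of that lookup, assuming the interface name arrives as the first argument (not necessarily the exact nvmf/common.sh implementation):

get_ip_address() {
    local interface=$1
    # "ip -o -4 addr show" emits one IPv4 record per line; field 4 is ADDR/PREFIX
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig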
00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:29.764 192.168.100.9' 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:29.764 192.168.100.9' 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:29.764 192.168.100.9' 00:12:29.764 
20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1020891 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1020891 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1020891 ']' 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.764 20:33:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:29.764 [2024-07-26 20:33:18.205050] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:12:29.764 [2024-07-26 20:33:18.205102] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.764 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.764 [2024-07-26 20:33:18.290055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.023 [2024-07-26 20:33:18.329933] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.023 [2024-07-26 20:33:18.329972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
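Once head -n 1 and tail -n +2 have split RDMA_IP_LIST into NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9, nvmfappstart launches the target and blocks until its RPC socket answers. A hedged sketch of that sequence, with the full waitforlisten helper replaced here by a simple poll of rpc.py rpc_get_methods:

rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!
# Block until the target answers RPCs on the default UNIX-domain socket.
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done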
00:12:30.023 [2024-07-26 20:33:18.329985] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.023 [2024-07-26 20:33:18.329994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.023 [2024-07-26 20:33:18.330001] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.023 [2024-07-26 20:33:18.330116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:30.023 [2024-07-26 20:33:18.330232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:30.023 [2024-07-26 20:33:18.330340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.023 [2024-07-26 20:33:18.330341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:30.591 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.591 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:30.591 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:30.591 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:30.591 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.591 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.591 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:30.591 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.591 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.591 [2024-07-26 20:33:19.099485] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x94c720/0x950c10) succeed. 00:12:30.591 [2024-07-26 20:33:19.108954] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x94dd60/0x9922a0) succeed. 
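With the target up, the test provisions it entirely over RPC, as traced below: create the RDMA transport, back a namespace with a 64 MiB malloc bdev, and expose it through a subsystem listener on 192.168.100.8:4420. rpc_cmd ultimately drives scripts/rpc.py against /var/tmp/spdk.sock, so the same sequence can be issued by hand; a sketch, assuming the target from the previous step is still running:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420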
00:12:30.850 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.850 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:30.850 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.850 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.850 Malloc0 00:12:30.850 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.850 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:30.850 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.850 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.850 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.850 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:30.850 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:30.851 [2024-07-26 20:33:19.274443] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:30.851 { 00:12:30.851 "params": { 00:12:30.851 "name": "Nvme$subsystem", 00:12:30.851 "trtype": "$TEST_TRANSPORT", 00:12:30.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:30.851 "adrfam": "ipv4", 00:12:30.851 "trsvcid": "$NVMF_PORT", 00:12:30.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:30.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:30.851 "hdgst": ${hdgst:-false}, 00:12:30.851 "ddgst": ${ddgst:-false} 00:12:30.851 }, 00:12:30.851 "method": "bdev_nvme_attach_controller" 00:12:30.851 } 00:12:30.851 EOF 00:12:30.851 )") 00:12:30.851 20:33:19 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:30.851 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:30.851 "params": { 00:12:30.851 "name": "Nvme1", 00:12:30.851 "trtype": "rdma", 00:12:30.851 "traddr": "192.168.100.8", 00:12:30.851 "adrfam": "ipv4", 00:12:30.851 "trsvcid": "4420", 00:12:30.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:30.851 "hdgst": false, 00:12:30.851 "ddgst": false 00:12:30.851 }, 00:12:30.851 "method": "bdev_nvme_attach_controller" 00:12:30.851 }' 00:12:30.851 [2024-07-26 20:33:19.324110] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:12:30.851 [2024-07-26 20:33:19.324159] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021032 ] 00:12:30.851 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.144 [2024-07-26 20:33:19.411960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.144 [2024-07-26 20:33:19.453579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.144 [2024-07-26 20:33:19.453679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.144 [2024-07-26 20:33:19.453684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.144 I/O targets: 00:12:31.144 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:31.144 00:12:31.144 00:12:31.144 CUnit - A unit testing framework for C - Version 2.1-3 00:12:31.144 http://cunit.sourceforge.net/ 00:12:31.144 00:12:31.144 00:12:31.144 Suite: bdevio tests on: Nvme1n1 00:12:31.144 Test: blockdev write read block ...passed 00:12:31.144 Test: blockdev write zeroes read block ...passed 00:12:31.144 Test: blockdev write zeroes read no split ...passed 00:12:31.144 Test: blockdev write zeroes read split ...passed 00:12:31.144 Test: blockdev write zeroes read split partial ...passed 00:12:31.144 Test: blockdev reset ...[2024-07-26 20:33:19.659552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:31.144 [2024-07-26 20:33:19.682394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:12:31.405 [2024-07-26 20:33:19.709221] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
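The gen_nvmf_target_json expansion printed above is what bdevio receives on /dev/fd/62. Written out as a file it is a standard SPDK JSON configuration; a hedged reconstruction follows, with the subsystems/bdev/config envelope assumed from the usual --json format and the params block taken verbatim from the trace:

cat << 'JSON' > /tmp/bdevio.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio.json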
00:12:31.405 passed 00:12:31.405 Test: blockdev write read 8 blocks ...passed 00:12:31.405 Test: blockdev write read size > 128k ...passed 00:12:31.405 Test: blockdev write read invalid size ...passed 00:12:31.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:31.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:31.405 Test: blockdev write read max offset ...passed 00:12:31.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:31.405 Test: blockdev writev readv 8 blocks ...passed 00:12:31.405 Test: blockdev writev readv 30 x 1block ...passed 00:12:31.405 Test: blockdev writev readv block ...passed 00:12:31.405 Test: blockdev writev readv size > 128k ...passed 00:12:31.405 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:31.405 Test: blockdev comparev and writev ...[2024-07-26 20:33:19.712112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.405 [2024-07-26 20:33:19.712143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:31.405 [2024-07-26 20:33:19.712156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.405 [2024-07-26 20:33:19.712165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:31.405 [2024-07-26 20:33:19.712327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.405 [2024-07-26 20:33:19.712338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:31.405 [2024-07-26 20:33:19.712349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.405 [2024-07-26 20:33:19.712358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:31.405 [2024-07-26 20:33:19.712521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.405 [2024-07-26 20:33:19.712532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:31.405 [2024-07-26 20:33:19.712543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.405 [2024-07-26 20:33:19.712552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:31.405 [2024-07-26 20:33:19.712714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.405 [2024-07-26 20:33:19.712725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:31.405 [2024-07-26 20:33:19.712736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:31.406 [2024-07-26 20:33:19.712745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:31.406 passed 00:12:31.406 Test: blockdev nvme passthru rw ...passed 00:12:31.406 Test: blockdev nvme passthru vendor specific ...[2024-07-26 20:33:19.713008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:31.406 [2024-07-26 20:33:19.713020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:31.406 [2024-07-26 20:33:19.713063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:31.406 [2024-07-26 20:33:19.713073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:31.406 [2024-07-26 20:33:19.713121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:31.406 [2024-07-26 20:33:19.713131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:31.406 [2024-07-26 20:33:19.713179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:31.406 [2024-07-26 20:33:19.713189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:31.406 passed 00:12:31.406 Test: blockdev nvme admin passthru ...passed 00:12:31.406 Test: blockdev copy ...passed 00:12:31.406 00:12:31.406 Run Summary: Type Total Ran Passed Failed Inactive 00:12:31.406 suites 1 1 n/a 0 0 00:12:31.406 tests 23 23 23 0 0 00:12:31.406 asserts 152 152 152 0 n/a 00:12:31.406 00:12:31.406 Elapsed time = 0.171 seconds 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:31.406 rmmod nvme_rdma 00:12:31.406 rmmod nvme_fabrics 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.406 20:33:19 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1020891 ']' 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1020891 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1020891 ']' 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1020891 00:12:31.406 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:31.665 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:31.665 20:33:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1020891 00:12:31.665 20:33:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:31.665 20:33:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:31.665 20:33:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1020891' 00:12:31.665 killing process with pid 1020891 00:12:31.665 20:33:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1020891 00:12:31.665 20:33:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1020891 00:12:31.924 20:33:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.924 20:33:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:31.924 00:12:31.924 real 0m10.711s 00:12:31.924 user 0m11.136s 00:12:31.924 sys 0m7.026s 00:12:31.924 20:33:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.924 20:33:20 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:31.924 ************************************ 00:12:31.924 END TEST nvmf_bdevio 00:12:31.924 ************************************ 00:12:31.924 20:33:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:31.924 00:12:31.924 real 4m31.395s 00:12:31.924 user 11m1.830s 00:12:31.924 sys 1m51.260s 00:12:31.924 20:33:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.924 20:33:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:31.924 ************************************ 00:12:31.924 END TEST nvmf_target_core 00:12:31.924 ************************************ 00:12:31.925 20:33:20 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:31.925 20:33:20 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:31.925 20:33:20 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.925 20:33:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:31.925 ************************************ 00:12:31.925 START TEST nvmf_target_extra 00:12:31.925 ************************************ 00:12:31.925 20:33:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:32.185 * Looking for test storage... 00:12:32.185 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.185 ************************************ 00:12:32.185 START TEST nvmf_example 00:12:32.185 ************************************ 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:32.185 * Looking for test storage... 00:12:32.185 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.185 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.186 20:33:20 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:32.186 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:32.445 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:32.445 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:32.445 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.445 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:32.445 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:32.445 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:32.445 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.445 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.445 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.445 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:32.445 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:32.445 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:12:32.445 20:33:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:40.574 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:40.574 20:33:28 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:40.574 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:40.574 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:d9:00.1: mlx_0_1' 00:12:40.574 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:12:40.574 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # uname 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
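
The trace above (nvmf/common.sh@293-420 and @58-68) discovers RDMA-capable NICs by PCI vendor:device ID, maps each PCI function to its kernel net interface through sysfs, and loads the InfiniBand/RDMA module stack before handing out IPs. A minimal standalone sketch of the same flow, assuming the ConnectX-4 Lx parts (0x15b3:0x1015) found in this run; the lspci-based lookup is illustrative (the harness builds its own pci_bus_cache), while the sysfs mapping and modprobe list come straight from the trace:

#!/usr/bin/env bash
# Sketch only: mirrors what nvmf/common.sh does in the trace above.
set -euo pipefail

# 1. Find PCI functions for a given vendor:device pair (here 15b3:1015,
#    the two "Found 0000:d9:00.x" devices in the log).
mapfile -t pci_devs < <(lspci -Dnd 15b3:1015 | awk '{print $1}')

# 2. Map each PCI function to its net interface via sysfs, exactly like
#    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) at common.sh@383.
net_devs=()
for pci in "${pci_devs[@]}"; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $dev ]] && net_devs+=("${dev##*/}")
    done
done

# 3. Load the kernel RDMA stack (common.sh@62-68).
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done

printf 'RDMA-capable net devices: %s\n' "${net_devs[*]}"

On this machine the two loop iterations would report mlx_0_0 and mlx_0_1, matching the "Found net devices under 0000:d9:00.x" lines above.
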
00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:40.575 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:40.575 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:40.575 altname enp217s0f0np0 00:12:40.575 altname ens818f0np0 00:12:40.575 inet 192.168.100.8/24 scope global mlx_0_0 00:12:40.575 valid_lft forever preferred_lft forever 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:40.575 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:40.575 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:40.575 altname enp217s0f1np1 00:12:40.575 altname ens818f1np1 00:12:40.575 inet 192.168.100.9/24 scope global mlx_0_1 00:12:40.575 valid_lft forever preferred_lft forever 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
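
Each interface's IPv4 address is then read back with the ip/awk/cut pipeline visible at common.sh@113. The same helper in isolation (the function name matches the one in the trace; everything else is a direct restatement of the pipeline):

# Print the first IPv4 address assigned to an interface, or nothing if unset.
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per address; field 4 is "ADDR/PREFIX",
    # so cut off the prefix length to leave the bare address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 in the run above
get_ip_address mlx_0_1   # -> 192.168.100.9
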
00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:40.575 192.168.100.9' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:40.575 192.168.100.9' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:40.575 192.168.100.9' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1025277 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1025277 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1025277 ']' 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
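
With both interfaces resolved, the harness collapses the newline-separated address list into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP using head/tail, as at common.sh@456-458. The same selection stated plainly (values taken from this run):

RDMA_IP_LIST='192.168.100.8
192.168.100.9'

# The first line of the list becomes the primary target address...
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
# ...and the second line (skip one, then take one) the secondary.
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

echo "$NVMF_FIRST_TARGET_IP"    # 192.168.100.8
echo "$NVMF_SECOND_TARGET_IP"   # 192.168.100.9
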
00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:40.575 20:33:28 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:40.575 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:41.511 20:33:29 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:41.511 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.724 Initializing NVMe Controllers 00:12:53.724 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:53.724 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:53.724 Initialization complete. Launching workers. 00:12:53.724 ======================================================== 00:12:53.724 Latency(us) 00:12:53.724 Device Information : IOPS MiB/s Average min max 00:12:53.724 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 24400.32 95.31 2624.54 622.27 13042.56 00:12:53.724 ======================================================== 00:12:53.724 Total : 24400.32 95.31 2624.54 622.27 13042.56 00:12:53.724 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:53.724 rmmod nvme_rdma 00:12:53.724 rmmod nvme_fabrics 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1025277 ']' 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1025277 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1025277 ']' 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1025277 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1025277 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1025277' 00:12:53.724 killing process with pid 1025277 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1025277 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1025277 00:12:53.724 nvmf threads initialize successfully 00:12:53.724 bdev subsystem init successfully 00:12:53.724 created a nvmf target service 00:12:53.724 create targets's poll groups done 00:12:53.724 all subsystems of target started 00:12:53.724 nvmf target is running 00:12:53.724 all subsystems of target stopped 00:12:53.724 destroy targets's poll groups done 00:12:53.724 destroyed the nvmf target service 00:12:53.724 bdev subsystem finish successfully 00:12:53.724 nvmf threads destroy successfully 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:53.724 00:12:53.724 real 0m21.009s 00:12:53.724 user 0m52.338s 00:12:53.724 sys 0m6.778s 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:53.724 ************************************ 00:12:53.724 END TEST nvmf_example 00:12:53.724 ************************************ 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:53.724 ************************************ 00:12:53.724 START TEST nvmf_filesystem 00:12:53.724 ************************************ 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:53.724 * Looking for test storage... 
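
Condensed, the nvmf_example body traced above does five things: start the example target app, create an RDMA transport, back a subsystem with a malloc bdev, expose an RDMA listener, drive it with spdk_nvme_perf, then kill the target. A sketch of the same sequence using SPDK's stock rpc.py instead of the harness's rpc_cmd wrapper; paths assume you are in an SPDK build tree, and all flag values are the ones visible in the log:

# Start the example NVMe-oF target (invocation as in nvmf_example.sh@33;
# -m 0xF pins it to cores 0-3). It listens on /var/tmp/spdk.sock for RPCs.
./build/examples/nvmf -i 0 -g 10000 -m 0xF &
nvmfpid=$!

# Configure the target over the default RPC socket.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512   # 64 MiB, 512 B blocks -> Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420

# Exercise it: queue depth 64, 4 KiB random mixed I/O, 30% reads, 10 seconds.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

# Tear down. Like the harness's killprocess(), sanity-check that the pid
# still names our binary (ps --no-headers -o comm=) before killing it.
[[ $(ps --no-headers -o comm= "$nvmfpid") == nvmf ]] && kill "$nvmfpid"
wait "$nvmfpid" || true

This is what the trap/nvmftestfini/killprocess entries above perform, with the extra step of unloading nvme-rdma and nvme-fabrics afterwards.
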
00:12:53.724 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:12:53.724 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 
00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 
00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:12:53.725 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:53.725 #define SPDK_CONFIG_H 00:12:53.725 #define SPDK_CONFIG_APPS 1 00:12:53.726 #define SPDK_CONFIG_ARCH native 00:12:53.726 #undef SPDK_CONFIG_ASAN 00:12:53.726 #undef SPDK_CONFIG_AVAHI 00:12:53.726 #undef SPDK_CONFIG_CET 00:12:53.726 
#define SPDK_CONFIG_COVERAGE 1 00:12:53.726 #define SPDK_CONFIG_CROSS_PREFIX 00:12:53.726 #undef SPDK_CONFIG_CRYPTO 00:12:53.726 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:53.726 #undef SPDK_CONFIG_CUSTOMOCF 00:12:53.726 #undef SPDK_CONFIG_DAOS 00:12:53.726 #define SPDK_CONFIG_DAOS_DIR 00:12:53.726 #define SPDK_CONFIG_DEBUG 1 00:12:53.726 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:53.726 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:12:53.726 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:12:53.726 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:12:53.726 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:53.726 #undef SPDK_CONFIG_DPDK_UADK 00:12:53.726 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:53.726 #define SPDK_CONFIG_EXAMPLES 1 00:12:53.726 #undef SPDK_CONFIG_FC 00:12:53.726 #define SPDK_CONFIG_FC_PATH 00:12:53.726 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:53.726 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:53.726 #undef SPDK_CONFIG_FUSE 00:12:53.726 #undef SPDK_CONFIG_FUZZER 00:12:53.726 #define SPDK_CONFIG_FUZZER_LIB 00:12:53.726 #undef SPDK_CONFIG_GOLANG 00:12:53.726 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:53.726 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:53.726 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:53.726 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:53.726 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:53.726 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:53.726 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:53.726 #define SPDK_CONFIG_IDXD 1 00:12:53.726 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:53.726 #undef SPDK_CONFIG_IPSEC_MB 00:12:53.726 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:53.726 #define SPDK_CONFIG_ISAL 1 00:12:53.726 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:53.726 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:53.726 #define SPDK_CONFIG_LIBDIR 00:12:53.726 #undef SPDK_CONFIG_LTO 00:12:53.726 #define SPDK_CONFIG_MAX_LCORES 128 00:12:53.726 #define SPDK_CONFIG_NVME_CUSE 1 00:12:53.726 #undef SPDK_CONFIG_OCF 00:12:53.726 #define SPDK_CONFIG_OCF_PATH 00:12:53.726 #define SPDK_CONFIG_OPENSSL_PATH 00:12:53.726 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:53.726 #define SPDK_CONFIG_PGO_DIR 00:12:53.726 #undef SPDK_CONFIG_PGO_USE 00:12:53.726 #define SPDK_CONFIG_PREFIX /usr/local 00:12:53.726 #undef SPDK_CONFIG_RAID5F 00:12:53.726 #undef SPDK_CONFIG_RBD 00:12:53.726 #define SPDK_CONFIG_RDMA 1 00:12:53.726 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:53.726 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:53.726 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:53.726 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:53.726 #define SPDK_CONFIG_SHARED 1 00:12:53.726 #undef SPDK_CONFIG_SMA 00:12:53.726 #define SPDK_CONFIG_TESTS 1 00:12:53.726 #undef SPDK_CONFIG_TSAN 00:12:53.726 #define SPDK_CONFIG_UBLK 1 00:12:53.726 #define SPDK_CONFIG_UBSAN 1 00:12:53.726 #undef SPDK_CONFIG_UNIT_TESTS 00:12:53.726 #undef SPDK_CONFIG_URING 00:12:53.726 #define SPDK_CONFIG_URING_PATH 00:12:53.726 #undef SPDK_CONFIG_URING_ZNS 00:12:53.726 #undef SPDK_CONFIG_USDT 00:12:53.726 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:53.726 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:53.726 #undef SPDK_CONFIG_VFIO_USER 00:12:53.726 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:53.726 #define SPDK_CONFIG_VHOST 1 00:12:53.726 #define SPDK_CONFIG_VIRTIO 1 00:12:53.726 #undef SPDK_CONFIG_VTUNE 00:12:53.726 #define SPDK_CONFIG_VTUNE_DIR 00:12:53.726 #define SPDK_CONFIG_WERROR 1 00:12:53.726 
#define SPDK_CONFIG_WPDK_DIR 00:12:53.726 #undef SPDK_CONFIG_XNVME 00:12:53.726 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:53.726 20:33:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:53.726 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@84 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@114 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:53.727 
20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
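
The long run of "-- # : <value>" / "-- # export SPDK_TEST_*" pairs traced above is autotest_common.sh giving every test flag a default and then exporting it, so the scripts it later spawns can branch on plain environment variables. A minimal sketch of that pattern, with flag names and values taken from this run (the exact guard syntax in autotest_common.sh may differ):

    # Assign a default only if the caller has not already set the flag,
    # then export it so child test scripts inherit the value.
    : "${SPDK_TEST_NVMF:=1}";              export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=mlx5}";      export SPDK_TEST_NVMF_NICS

    # A test script can then gate a whole suite on a single flag:
    if (( SPDK_TEST_NVMF == 1 )); then
        echo "running NVMe-oF tests over $SPDK_TEST_NVMF_TRANSPORT"
    fi
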
00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:12:53.727 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:53.728 20:33:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=rdma 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1027607 ]] 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1027607 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:12:53.728 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 
00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.7Lt3S0 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.7Lt3S0/tests/target /tmp/spdk.7Lt3S0 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=919109632 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4365320192 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=49360535552 00:12:53.729 
20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742276608 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=12381741056 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30805581824 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=65556480 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12325023744 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348456960 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23433216 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30865612800 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5525504 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6174220288 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174224384 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 
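
Note how the LD_LIBRARY_PATH and PYTHONPATH values exported at autotest_common.sh@180 and @187 above contain the same few directories over and over: each time the common script is sourced (once per nesting level of the run) it appends its directories again without checking whether they are already present, so the variables grow but stay harmless. Roughly what each sourcing performs, plus an append-once guard for contrast (the guard is an illustration only, not something the script actually uses):

    # What each sourcing effectively does, hence the repetition in the trace:
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR"

    # An append-once guard would keep each entry unique (illustrative only):
    append_once() {
        local dir=$1
        case ":$LD_LIBRARY_PATH:" in
            *":$dir:"*) ;;  # already present, nothing to do
            *) LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$dir" ;;
        esac
    }
    append_once "$SPDK_LIB_DIR"
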
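
The set_test_storage trace just above reads `df -T` row by row into parallel associative arrays keyed by mount point, and the "Looking for test storage" step that follows walks the candidate directories until one has room for the requested space (roughly 2 GiB in this run). A condensed sketch of that logic, reusing the variable names from the trace; it is a simplification, not the verbatim helper:

    set_test_storage() {
        local requested_size=$1 source fs size use avail _ mount
        local -A mounts fss sizes avails uses

        # Parse `df -T` (skipping the header row) into per-mount arrays;
        # df reports 1K blocks, so scale everything to bytes.
        while read -r source fs size use avail _ mount; do
            mounts["$mount"]=$source
            fss["$mount"]=$fs
            sizes["$mount"]=$((size * 1024))
            avails["$mount"]=$((avail * 1024))
            uses["$mount"]=$((use * 1024))
        done < <(df -T | grep -v Filesystem)

        # Pick a mount with enough free space for the test data.
        for mount in "${!avails[@]}"; do
            if (( avails["$mount"] >= requested_size )); then
                printf '* Found test storage at %s\n' "$mount"
                return 0
            fi
        done
        return 1
    }
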
00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:12:53.729 * Looking for test storage... 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=49360535552 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=14596333568 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.729 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # 
xtrace_fd 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.729 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.730 20:33:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.730 20:33:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.730 20:33:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:53.730 20:33:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:53.730 20:33:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:53.730 20:33:42 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:01.855 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:01.856 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:01.856 20:33:50 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:01.856 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:01.856 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:01.856 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:01.856 20:33:50 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:01.856 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:01.856 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:01.856 altname enp217s0f0np0 00:13:01.856 altname ens818f0np0 00:13:01.856 inet 192.168.100.8/24 scope global mlx_0_0 00:13:01.856 valid_lft forever preferred_lft forever 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:01.856 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:01.856 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:01.856 altname enp217s0f1np1 00:13:01.856 altname ens818f1np1 00:13:01.856 inet 192.168.100.9/24 scope global mlx_0_1 00:13:01.856 valid_lft forever preferred_lft forever 00:13:01.856 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:13:01.856 20:33:50 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address 
mlx_0_1 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:01.857 192.168.100.9' 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:01.857 192.168.100.9' 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:01.857 192.168.100.9' 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:01.857 ************************************ 00:13:01.857 START TEST nvmf_filesystem_no_in_capsule 00:13:01.857 ************************************ 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:01.857 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:01.857 20:33:50 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.116 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1031532 00:13:02.116 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1031532 00:13:02.116 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.116 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1031532 ']' 00:13:02.116 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.116 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:02.116 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.116 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:02.116 20:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.116 [2024-07-26 20:33:50.452808] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:13:02.117 [2024-07-26 20:33:50.452855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.117 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.117 [2024-07-26 20:33:50.537981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.117 [2024-07-26 20:33:50.578580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.117 [2024-07-26 20:33:50.578622] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.117 [2024-07-26 20:33:50.578642] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.117 [2024-07-26 20:33:50.578653] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.117 [2024-07-26 20:33:50.578663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
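The trace above is the nvmfappstart step: it launches the target with a four-core mask (-m 0xF), records the pid in nvmfpid, and blocks until the app is listening on its RPC socket. A minimal stand-alone sketch of the same flow, assuming the SPDK tree layout used by this job; the polling loop only approximates the waitforlisten helper, it is not its exact code:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the target answers; only then is it safe to configure it.
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done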
00:13:02.117 [2024-07-26 20:33:50.578727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.117 [2024-07-26 20:33:50.578823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.117 [2024-07-26 20:33:50.578907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.117 [2024-07-26 20:33:50.578910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.052 [2024-07-26 20:33:51.317135] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:13:03.052 [2024-07-26 20:33:51.338718] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x180dea0/0x1812390) succeed. 00:13:03.052 [2024-07-26 20:33:51.348032] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x180f4e0/0x1853a20) succeed. 
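With the target up, the export is provisioned entirely over JSON-RPC: the lines above create the RDMA transport with in-capsule data disabled (-c 0, which the target clamps up to the 256-byte minimum needed for msdbd=16), and the lines that follow add a 512 MiB malloc bdev, a subsystem, a namespace, and an RDMA listener. Since rpc_cmd is a wrapper around scripts/rpc.py, the same sequence issued by hand would look roughly like:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1     # 512 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420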
00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.052 Malloc1 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.052 [2024-07-26 20:33:51.585887] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:03.052 20:33:51 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.052 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.310 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.310 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:03.310 { 00:13:03.310 "name": "Malloc1", 00:13:03.310 "aliases": [ 00:13:03.310 "50f34f7f-7205-42f5-be3c-d1e209664b6d" 00:13:03.310 ], 00:13:03.310 "product_name": "Malloc disk", 00:13:03.310 "block_size": 512, 00:13:03.310 "num_blocks": 1048576, 00:13:03.310 "uuid": "50f34f7f-7205-42f5-be3c-d1e209664b6d", 00:13:03.310 "assigned_rate_limits": { 00:13:03.310 "rw_ios_per_sec": 0, 00:13:03.310 "rw_mbytes_per_sec": 0, 00:13:03.310 "r_mbytes_per_sec": 0, 00:13:03.310 "w_mbytes_per_sec": 0 00:13:03.310 }, 00:13:03.310 "claimed": true, 00:13:03.310 "claim_type": "exclusive_write", 00:13:03.310 "zoned": false, 00:13:03.310 "supported_io_types": { 00:13:03.310 "read": true, 00:13:03.310 "write": true, 00:13:03.310 "unmap": true, 00:13:03.310 "flush": true, 00:13:03.310 "reset": true, 00:13:03.310 "nvme_admin": false, 00:13:03.310 "nvme_io": false, 00:13:03.310 "nvme_io_md": false, 00:13:03.310 "write_zeroes": true, 00:13:03.310 "zcopy": true, 00:13:03.310 "get_zone_info": false, 00:13:03.310 "zone_management": false, 00:13:03.310 "zone_append": false, 00:13:03.310 "compare": false, 00:13:03.310 "compare_and_write": false, 00:13:03.310 "abort": true, 00:13:03.310 "seek_hole": false, 00:13:03.310 "seek_data": false, 00:13:03.310 "copy": true, 00:13:03.310 "nvme_iov_md": false 00:13:03.310 }, 00:13:03.310 "memory_domains": [ 00:13:03.310 { 00:13:03.310 "dma_device_id": "system", 00:13:03.310 "dma_device_type": 1 00:13:03.310 }, 00:13:03.310 { 00:13:03.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.310 "dma_device_type": 2 00:13:03.310 } 00:13:03.310 ], 00:13:03.310 "driver_specific": {} 00:13:03.310 } 00:13:03.310 ]' 00:13:03.310 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:03.310 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:03.310 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:03.310 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:03.310 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:03.310 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:03.310 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:13:03.310 20:33:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:04.246 20:33:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.246 20:33:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:04.246 20:33:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.246 20:33:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:04.246 20:33:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:06.221 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:06.480 20:33:54 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:07.859 20:33:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:07.859 20:33:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:07.859 20:33:55 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.859 ************************************ 00:13:07.859 START TEST filesystem_ext4 00:13:07.859 ************************************ 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:07.859 mke2fs 1.46.5 (30-Dec-2021) 00:13:07.859 Discarding device blocks: 0/522240 done 00:13:07.859 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:07.859 Filesystem UUID: 0ef36c57-5eaa-4eec-bae7-4e9613a97a9e 00:13:07.859 Superblock backups stored on 
blocks: 00:13:07.859 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:07.859 00:13:07.859 Allocating group tables: 0/64 done 00:13:07.859 Writing inode tables: 0/64 done 00:13:07.859 Creating journal (8192 blocks): done 00:13:07.859 Writing superblocks and filesystem accounting information: 0/64 done 00:13:07.859 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1031532 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:07.859 00:13:07.859 real 0m0.193s 00:13:07.859 user 0m0.027s 00:13:07.859 sys 0m0.082s 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:07.859 ************************************ 00:13:07.859 END TEST filesystem_ext4 00:13:07.859 ************************************ 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:13:07.859 ************************************ 00:13:07.859 START TEST filesystem_btrfs 00:13:07.859 ************************************ 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:07.859 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:08.119 btrfs-progs v6.6.2 00:13:08.119 See https://btrfs.readthedocs.io for more information. 00:13:08.119 00:13:08.119 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:08.119 NOTE: several default settings have changed in version 5.15, please make sure 00:13:08.119 this does not affect your deployments: 00:13:08.119 - DUP for metadata (-m dup) 00:13:08.119 - enabled no-holes (-O no-holes) 00:13:08.119 - enabled free-space-tree (-R free-space-tree) 00:13:08.119 00:13:08.119 Label: (null) 00:13:08.119 UUID: 54b4261a-d626-49a5-a863-b6fe6755a4a6 00:13:08.119 Node size: 16384 00:13:08.119 Sector size: 4096 00:13:08.119 Filesystem size: 510.00MiB 00:13:08.119 Block group profiles: 00:13:08.119 Data: single 8.00MiB 00:13:08.119 Metadata: DUP 32.00MiB 00:13:08.119 System: DUP 8.00MiB 00:13:08.119 SSD detected: yes 00:13:08.119 Zoned device: no 00:13:08.119 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:08.119 Runtime features: free-space-tree 00:13:08.119 Checksum: crc32c 00:13:08.119 Number of devices: 1 00:13:08.119 Devices: 00:13:08.119 ID SIZE PATH 00:13:08.119 1 510.00MiB /dev/nvme0n1p1 00:13:08.119 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1031532 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:08.119 00:13:08.119 real 0m0.267s 00:13:08.119 user 0m0.032s 00:13:08.119 sys 0m0.145s 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:08.119 ************************************ 
00:13:08.119 END TEST filesystem_btrfs 00:13:08.119 ************************************ 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.119 ************************************ 00:13:08.119 START TEST filesystem_xfs 00:13:08.119 ************************************ 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:08.119 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:08.379 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:08.379 = sectsz=512 attr=2, projid32bit=1 00:13:08.379 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:08.379 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:08.379 data = bsize=4096 blocks=130560, imaxpct=25 00:13:08.379 = sunit=0 swidth=0 blks 00:13:08.379 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:08.379 log =internal log bsize=4096 blocks=16384, version=2 00:13:08.379 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:08.379 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:08.379 Discarding blocks...Done. 
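All three filesystem cases (ext4, btrfs, and the xfs run whose mkfs output ends above) share the same verify loop from target/filesystem.sh: mount the new partition, create and remove a file with syncs around it, unmount, then confirm the target process survived the I/O. Paraphrased as plain shell, with the pid taken from the nvmfpid recorded at startup:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"    # fails if the nvmf target crashed during the test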
00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1031532 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:08.379 00:13:08.379 real 0m0.205s 00:13:08.379 user 0m0.026s 00:13:08.379 sys 0m0.080s 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:08.379 ************************************ 00:13:08.379 END TEST filesystem_xfs 00:13:08.379 ************************************ 00:13:08.379 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:08.638 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:08.638 20:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:09.576 20:33:57 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1031532 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1031532 ']' 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1031532 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.576 20:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1031532 00:13:09.576 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:09.576 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:09.576 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1031532' 00:13:09.576 killing process with pid 1031532 00:13:09.576 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1031532 00:13:09.576 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1031532 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:10.145 00:13:10.145 real 0m7.989s 00:13:10.145 user 0m31.248s 00:13:10.145 sys 0m1.263s 00:13:10.145 20:33:58 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.145 ************************************ 00:13:10.145 END TEST nvmf_filesystem_no_in_capsule 00:13:10.145 ************************************ 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:10.145 ************************************ 00:13:10.145 START TEST nvmf_filesystem_in_capsule 00:13:10.145 ************************************ 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1033214 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1033214 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1033214 ']' 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
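This second test repeats the same filesystem matrix with in-capsule data enabled. The only functional difference from the run above is the -c argument to nvmf_create_transport: 4096 instead of 0, which lets writes of up to 4 KiB travel inside the RDMA command capsule rather than being fetched by the target with a separate RDMA READ. The differing call, in rpc.py form:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096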
00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:10.145 20:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.145 [2024-07-26 20:33:58.529362] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:13:10.145 [2024-07-26 20:33:58.529411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.145 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.145 [2024-07-26 20:33:58.614833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.145 [2024-07-26 20:33:58.654739] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.145 [2024-07-26 20:33:58.654778] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.145 [2024-07-26 20:33:58.654793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.145 [2024-07-26 20:33:58.654804] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.145 [2024-07-26 20:33:58.654813] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.145 [2024-07-26 20:33:58.654878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.145 [2024-07-26 20:33:58.654896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.145 [2024-07-26 20:33:58.654985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.145 [2024-07-26 20:33:58.654988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.082 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:11.082 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:11.082 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:11.082 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:11.082 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.082 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.082 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:11.082 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:13:11.082 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.082 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.083 [2024-07-26 20:33:59.405850] 
rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xab9ea0/0xabe390) succeed. 00:13:11.083 [2024-07-26 20:33:59.415083] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xabb4e0/0xaffa20) succeed. 00:13:11.083 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.083 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:11.083 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.083 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.342 Malloc1 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.342 [2024-07-26 20:33:59.677448] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:11.342 20:33:59 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.342 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:11.342 { 00:13:11.342 "name": "Malloc1", 00:13:11.342 "aliases": [ 00:13:11.342 "ec7abec5-759f-40a3-b244-8d23ce2af464" 00:13:11.342 ], 00:13:11.342 "product_name": "Malloc disk", 00:13:11.342 "block_size": 512, 00:13:11.342 "num_blocks": 1048576, 00:13:11.342 "uuid": "ec7abec5-759f-40a3-b244-8d23ce2af464", 00:13:11.342 "assigned_rate_limits": { 00:13:11.342 "rw_ios_per_sec": 0, 00:13:11.342 "rw_mbytes_per_sec": 0, 00:13:11.342 "r_mbytes_per_sec": 0, 00:13:11.342 "w_mbytes_per_sec": 0 00:13:11.342 }, 00:13:11.342 "claimed": true, 00:13:11.342 "claim_type": "exclusive_write", 00:13:11.342 "zoned": false, 00:13:11.342 "supported_io_types": { 00:13:11.342 "read": true, 00:13:11.342 "write": true, 00:13:11.342 "unmap": true, 00:13:11.342 "flush": true, 00:13:11.342 "reset": true, 00:13:11.342 "nvme_admin": false, 00:13:11.342 "nvme_io": false, 00:13:11.342 "nvme_io_md": false, 00:13:11.342 "write_zeroes": true, 00:13:11.342 "zcopy": true, 00:13:11.342 "get_zone_info": false, 00:13:11.342 "zone_management": false, 00:13:11.342 "zone_append": false, 00:13:11.342 "compare": false, 00:13:11.342 "compare_and_write": false, 00:13:11.342 "abort": true, 00:13:11.342 "seek_hole": false, 00:13:11.342 "seek_data": false, 00:13:11.342 "copy": true, 00:13:11.342 "nvme_iov_md": false 00:13:11.342 }, 00:13:11.342 "memory_domains": [ 00:13:11.342 { 00:13:11.342 "dma_device_id": "system", 00:13:11.342 "dma_device_type": 1 00:13:11.342 }, 00:13:11.342 { 00:13:11.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.342 "dma_device_type": 2 00:13:11.342 } 00:13:11.342 ], 00:13:11.342 "driver_specific": {} 00:13:11.342 } 00:13:11.342 ]' 00:13:11.343 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:11.343 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:11.343 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:11.343 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:11.343 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:11.343 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:11.343 20:33:59 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:11.343 20:33:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:12.279 20:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.280 20:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.280 20:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.280 20:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:12.280 20:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:14.811 20:34:02 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:14.811 20:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:14.811 20:34:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:15.744 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:15.744 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:15.744 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:15.744 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.744 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:15.744 ************************************ 00:13:15.744 START TEST filesystem_in_capsule_ext4 00:13:15.744 ************************************ 00:13:15.744 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:15.744 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:15.744 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:15.744 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:15.744 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:15.744 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:15.744 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:15.745 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:15.745 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:15.745 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:15.745 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:15.745 mke2fs 1.46.5 (30-Dec-2021) 00:13:15.745 Discarding device blocks: 0/522240 done 
00:13:15.745 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:15.745 Filesystem UUID: f86c6b53-d0fe-4f3c-ba93-16547bc1de57 00:13:15.745 Superblock backups stored on blocks: 00:13:15.745 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:15.745 00:13:15.745 Allocating group tables: 0/64 done 00:13:15.745 Writing inode tables: 0/64 done 00:13:15.745 Creating journal (8192 blocks): done 00:13:15.745 Writing superblocks and filesystem accounting information: 0/64 done 00:13:15.745 00:13:15.745 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:15.745 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:15.745 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:15.745 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:15.745 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:15.745 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:15.745 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:15.745 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1033214 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:16.004 00:13:16.004 real 0m0.190s 00:13:16.004 user 0m0.026s 00:13:16.004 sys 0m0.082s 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:16.004 ************************************ 00:13:16.004 END TEST filesystem_in_capsule_ext4 00:13:16.004 ************************************ 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:16.004 20:34:04 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:16.004 ************************************ 00:13:16.004 START TEST filesystem_in_capsule_btrfs 00:13:16.004 ************************************ 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:16.004 btrfs-progs v6.6.2 00:13:16.004 See https://btrfs.readthedocs.io for more information. 00:13:16.004 00:13:16.004 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
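[Annotation] Each filesystem pass runs the same smoke test, visible at target/filesystem.sh@23-43 in the ext4 run above and repeated for btrfs and xfs below: mount the new partition, create and delete a file with a sync after each step, unmount, verify with kill -0 that the nvmf target survived the I/O, and confirm via lsblk that the namespace and partition are still visible. A condensed sketch with the device, mountpoint and checks exactly as in the trace; the pid argument stands in for the target pid (1033214 in this run):

    smoke_test() {
        local dev=/dev/nvme0n1p1 mnt=/mnt/device pid=$1
        mount "$dev" "$mnt"
        touch "$mnt/aaa" && sync      # write path over NVMe-oF/RDMA
        rm "$mnt/aaa" && sync         # delete path
        umount "$mnt"
        kill -0 "$pid"                # target process must still be alive
        lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still attached
        lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition table intact
    }

The mkfs.btrfs output continues below.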
00:13:16.004 NOTE: several default settings have changed in version 5.15, please make sure 00:13:16.004 this does not affect your deployments: 00:13:16.004 - DUP for metadata (-m dup) 00:13:16.004 - enabled no-holes (-O no-holes) 00:13:16.004 - enabled free-space-tree (-R free-space-tree) 00:13:16.004 00:13:16.004 Label: (null) 00:13:16.004 UUID: 52c3c27d-c629-4af0-803a-880162b4d7f4 00:13:16.004 Node size: 16384 00:13:16.004 Sector size: 4096 00:13:16.004 Filesystem size: 510.00MiB 00:13:16.004 Block group profiles: 00:13:16.004 Data: single 8.00MiB 00:13:16.004 Metadata: DUP 32.00MiB 00:13:16.004 System: DUP 8.00MiB 00:13:16.004 SSD detected: yes 00:13:16.004 Zoned device: no 00:13:16.004 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:16.004 Runtime features: free-space-tree 00:13:16.004 Checksum: crc32c 00:13:16.004 Number of devices: 1 00:13:16.004 Devices: 00:13:16.004 ID SIZE PATH 00:13:16.004 1 510.00MiB /dev/nvme0n1p1 00:13:16.004 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:16.004 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:16.263 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:16.263 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:16.263 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:16.263 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:16.263 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:16.263 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:16.263 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1033214 00:13:16.263 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:16.263 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:16.264 00:13:16.264 real 0m0.268s 00:13:16.264 user 0m0.026s 00:13:16.264 sys 0m0.151s 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.264 20:34:04 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:16.264 ************************************ 00:13:16.264 END TEST filesystem_in_capsule_btrfs 00:13:16.264 ************************************ 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:16.264 ************************************ 00:13:16.264 START TEST filesystem_in_capsule_xfs 00:13:16.264 ************************************ 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:16.264 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:16.523 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:16.523 = sectsz=512 attr=2, projid32bit=1 00:13:16.523 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:16.523 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:16.523 data = bsize=4096 blocks=130560, imaxpct=25 00:13:16.523 = sunit=0 swidth=0 blks 00:13:16.523 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:16.523 log =internal log bsize=4096 blocks=16384, version=2 00:13:16.523 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:16.523 realtime =none extsz=4096 
blocks=0, rtextents=0 00:13:16.523 Discarding blocks...Done. 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1033214 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:16.523 00:13:16.523 real 0m0.207s 00:13:16.523 user 0m0.030s 00:13:16.523 sys 0m0.078s 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.523 20:34:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:16.523 ************************************ 00:13:16.523 END TEST filesystem_in_capsule_xfs 00:13:16.523 ************************************ 00:13:16.523 20:34:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:16.523 20:34:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:16.523 20:34:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.460 20:34:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.460 20:34:06 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:17.460 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:17.460 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1033214 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1033214 ']' 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1033214 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1033214 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1033214' 00:13:17.719 killing process with pid 1033214 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1033214 00:13:17.719 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1033214 00:13:17.979 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:17.979 00:13:17.979 real 0m8.039s 
00:13:17.979 user 0m31.361s 00:13:17.979 sys 0m1.306s 00:13:17.979 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:17.979 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.979 ************************************ 00:13:17.979 END TEST nvmf_filesystem_in_capsule 00:13:17.979 ************************************ 00:13:18.237 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:18.237 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:18.237 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:13:18.237 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:18.238 rmmod nvme_rdma 00:13:18.238 rmmod nvme_fabrics 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:18.238 00:13:18.238 real 0m24.914s 00:13:18.238 user 1m5.169s 00:13:18.238 sys 0m9.173s 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:18.238 ************************************ 00:13:18.238 END TEST nvmf_filesystem 00:13:18.238 ************************************ 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:18.238 ************************************ 00:13:18.238 START TEST nvmf_target_discovery 00:13:18.238 ************************************ 00:13:18.238 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:18.238 * Looking for test storage... 
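[Annotation] The teardown that just ran (target/filesystem.sh@91-101 followed by nvmftestfini) is the standard exit path for these targets: drop the test partition, disconnect the host, wait for the serial to disappear, delete the subsystem over RPC, kill nvmf_tgt, and unload the host-side RDMA modules. The same order, condensed from the trace; waitforserial_disconnect is reduced here to a plain lsblk poll, and rpc.py is what rpc_cmd wraps:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # remove the SPDK_TEST partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # host side
    # poll until the SPDKISFASTANDAWESOME serial is gone (simplified)
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
    done
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"                 # stop the target process
    modprobe -v -r nvme-rdma nvme-fabrics              # matches the rmmod lines above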
00:13:18.238 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.497 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.498 20:34:06 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.498 20:34:06 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@296 -- # e810=() 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.514 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:28.515 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == 
unbound ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:28.515 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:28.515 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:28.515 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:28.515 20:34:15 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:28.515 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:28.516 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:28.516 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:28.516 altname enp217s0f0np0 00:13:28.516 altname ens818f0np0 00:13:28.516 inet 192.168.100.8/24 scope global mlx_0_0 00:13:28.516 valid_lft forever preferred_lft forever 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip 
addr show mlx_0_1 00:13:28.516 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:28.516 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:28.516 altname enp217s0f1np1 00:13:28.516 altname ens818f1np1 00:13:28.516 inet 192.168.100.9/24 scope global mlx_0_1 00:13:28.516 valid_lft forever preferred_lft forever 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:28.516 192.168.100.9' 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:28.516 192.168.100.9' 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:28.516 192.168.100.9' 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1038775 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1038775 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1038775 ']' 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:28.516 20:34:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.516 [2024-07-26 20:34:15.577790] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:13:28.516 [2024-07-26 20:34:15.577839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.516 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.516 [2024-07-26 20:34:15.659619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.516 [2024-07-26 20:34:15.698731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.516 [2024-07-26 20:34:15.698775] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.516 [2024-07-26 20:34:15.698789] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.516 [2024-07-26 20:34:15.698800] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.516 [2024-07-26 20:34:15.698810] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
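[Annotation] nvmftestinit above loaded the IB/RDMA stack (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm), walked the two mlx_0_* interfaces, and derived the target IPs from ip -o -4 addr show; nvmfappstart then launched nvmf_tgt and waitforlisten blocked until the app answered on /var/tmp/spdk.sock. A condensed sketch, with the RPC probe as one plausible liveness check rather than the exact upstream one:

    get_ip_address() {    # mirrors nvmf/common.sh@112-113 above
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done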
00:13:28.516 [2024-07-26 20:34:15.698869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.516 [2024-07-26 20:34:15.698962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.516 [2024-07-26 20:34:15.699052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.516 [2024-07-26 20:34:15.699056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.516 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.516 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 [2024-07-26 20:34:16.461982] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xca6ea0/0xcab390) succeed. 00:13:28.517 [2024-07-26 20:34:16.471344] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xca84e0/0xceca20) succeed. 
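With the app up and both mlx5 IB devices created, the transport setup is a single RPC. A hedged equivalent of the rpc_cmd call above (scripts/rpc.py and the default socket assumed):

    # Create the RDMA transport with 1024 shared receive buffers and
    # 8192-byte in-capsule data, matching the test's options.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192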
00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 Null1 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 [2024-07-26 20:34:16.633768] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 Null2 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:28.517 20:34:16 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 Null3 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 Null4 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
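The Null1..Null4 provisioning traced above is one loop in target/discovery.sh; a sketch of the same topology built by hand (rpc.py and socket path assumed):

    # Per index: a null bdev (same 102400/512 sizing as the trace), a
    # subsystem with a matching serial, the bdev as a namespace, and an
    # RDMA listener on the first target IP.
    for i in $(seq 1 4); do
        ./scripts/rpc.py bdev_null_create "Null$i" 102400 512
        ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420
    done
    # Expose the discovery service itself and add a referral on 4430.
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430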
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.517 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:13:28.517 00:13:28.517 Discovery Log Number of Records 6, Generation counter 6 00:13:28.517 =====Discovery Log Entry 0====== 00:13:28.517 trtype: rdma 00:13:28.517 adrfam: ipv4 00:13:28.517 subtype: current discovery subsystem 00:13:28.517 treq: not required 00:13:28.518 portid: 0 00:13:28.518 trsvcid: 4420 00:13:28.518 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:28.518 traddr: 192.168.100.8 00:13:28.518 eflags: explicit discovery connections, duplicate discovery information 00:13:28.518 rdma_prtype: not specified 00:13:28.518 rdma_qptype: connected 00:13:28.518 rdma_cms: rdma-cm 00:13:28.518 rdma_pkey: 0x0000 00:13:28.518 =====Discovery Log Entry 1====== 00:13:28.518 trtype: rdma 00:13:28.518 adrfam: ipv4 00:13:28.518 subtype: nvme subsystem 00:13:28.518 treq: not required 00:13:28.518 portid: 0 00:13:28.518 trsvcid: 4420 00:13:28.518 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:28.518 traddr: 192.168.100.8 00:13:28.518 eflags: none 00:13:28.518 rdma_prtype: not specified 00:13:28.518 rdma_qptype: connected 00:13:28.518 rdma_cms: rdma-cm 00:13:28.518 rdma_pkey: 0x0000 00:13:28.518 =====Discovery Log Entry 2====== 00:13:28.518 trtype: rdma 00:13:28.518 adrfam: ipv4 00:13:28.518 subtype: nvme subsystem 00:13:28.518 treq: not required 00:13:28.518 portid: 0 00:13:28.518 trsvcid: 4420 00:13:28.518 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:28.518 traddr: 192.168.100.8 00:13:28.518 eflags: none 00:13:28.518 rdma_prtype: not specified 00:13:28.518 rdma_qptype: connected 00:13:28.518 rdma_cms: rdma-cm 00:13:28.518 rdma_pkey: 0x0000 00:13:28.518 =====Discovery Log Entry 3====== 00:13:28.518 trtype: rdma 00:13:28.518 adrfam: ipv4 00:13:28.518 subtype: nvme subsystem 00:13:28.518 treq: not required 00:13:28.518 portid: 0 00:13:28.518 trsvcid: 4420 00:13:28.518 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:28.518 traddr: 192.168.100.8 00:13:28.518 eflags: none 00:13:28.518 rdma_prtype: not specified 00:13:28.518 rdma_qptype: connected 00:13:28.518 rdma_cms: rdma-cm 00:13:28.518 rdma_pkey: 0x0000 00:13:28.518 =====Discovery Log Entry 4====== 00:13:28.518 trtype: rdma 00:13:28.518 adrfam: ipv4 00:13:28.518 subtype: nvme subsystem 00:13:28.518 treq: not required 00:13:28.518 portid: 0 00:13:28.518 trsvcid: 4420 00:13:28.518 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:28.518 traddr: 192.168.100.8 00:13:28.518 eflags: none 00:13:28.518 rdma_prtype: not specified 00:13:28.518 rdma_qptype: connected 00:13:28.518 rdma_cms: rdma-cm 00:13:28.518 rdma_pkey: 0x0000 00:13:28.518 =====Discovery Log Entry 5====== 00:13:28.518 trtype: rdma 00:13:28.518 adrfam: ipv4 00:13:28.518 subtype: discovery subsystem referral 00:13:28.518 treq: not required 00:13:28.518 portid: 0 00:13:28.518 trsvcid: 4430 00:13:28.518 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:28.518 traddr: 192.168.100.8 00:13:28.518 eflags: none 00:13:28.518 rdma_prtype: unrecognized 00:13:28.518 rdma_qptype: unrecognized 00:13:28.518 rdma_cms: unrecognized 00:13:28.518 rdma_pkey: 0x0000 00:13:28.518 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:28.518 Perform nvmf subsystem discovery via RPC 00:13:28.518 20:34:16 
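Six records come back, as expected: entry 0 is the discovery subsystem itself, entries 1-4 are cnode1-cnode4 on port 4420, and entry 5 is the referral added on port 4430. The same query can be issued by hand, with the host identity copied from the trace:

    nvme discover -t rdma -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e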
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:28.518 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.518 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.518 [ 00:13:28.518 { 00:13:28.518 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:28.518 "subtype": "Discovery", 00:13:28.518 "listen_addresses": [ 00:13:28.518 { 00:13:28.518 "trtype": "RDMA", 00:13:28.518 "adrfam": "IPv4", 00:13:28.518 "traddr": "192.168.100.8", 00:13:28.518 "trsvcid": "4420" 00:13:28.518 } 00:13:28.518 ], 00:13:28.518 "allow_any_host": true, 00:13:28.518 "hosts": [] 00:13:28.518 }, 00:13:28.518 { 00:13:28.518 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:28.518 "subtype": "NVMe", 00:13:28.518 "listen_addresses": [ 00:13:28.518 { 00:13:28.518 "trtype": "RDMA", 00:13:28.518 "adrfam": "IPv4", 00:13:28.518 "traddr": "192.168.100.8", 00:13:28.518 "trsvcid": "4420" 00:13:28.518 } 00:13:28.518 ], 00:13:28.518 "allow_any_host": true, 00:13:28.518 "hosts": [], 00:13:28.518 "serial_number": "SPDK00000000000001", 00:13:28.518 "model_number": "SPDK bdev Controller", 00:13:28.518 "max_namespaces": 32, 00:13:28.518 "min_cntlid": 1, 00:13:28.518 "max_cntlid": 65519, 00:13:28.518 "namespaces": [ 00:13:28.518 { 00:13:28.518 "nsid": 1, 00:13:28.518 "bdev_name": "Null1", 00:13:28.518 "name": "Null1", 00:13:28.518 "nguid": "F3269A86FFA1474288B290B58D444FC1", 00:13:28.518 "uuid": "f3269a86-ffa1-4742-88b2-90b58d444fc1" 00:13:28.518 } 00:13:28.518 ] 00:13:28.518 }, 00:13:28.518 { 00:13:28.518 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:28.518 "subtype": "NVMe", 00:13:28.518 "listen_addresses": [ 00:13:28.518 { 00:13:28.518 "trtype": "RDMA", 00:13:28.518 "adrfam": "IPv4", 00:13:28.518 "traddr": "192.168.100.8", 00:13:28.518 "trsvcid": "4420" 00:13:28.518 } 00:13:28.518 ], 00:13:28.518 "allow_any_host": true, 00:13:28.518 "hosts": [], 00:13:28.518 "serial_number": "SPDK00000000000002", 00:13:28.518 "model_number": "SPDK bdev Controller", 00:13:28.518 "max_namespaces": 32, 00:13:28.518 "min_cntlid": 1, 00:13:28.518 "max_cntlid": 65519, 00:13:28.518 "namespaces": [ 00:13:28.518 { 00:13:28.518 "nsid": 1, 00:13:28.518 "bdev_name": "Null2", 00:13:28.518 "name": "Null2", 00:13:28.518 "nguid": "9C4B9C3BEDCA411895CA65BE52502798", 00:13:28.518 "uuid": "9c4b9c3b-edca-4118-95ca-65be52502798" 00:13:28.518 } 00:13:28.518 ] 00:13:28.518 }, 00:13:28.518 { 00:13:28.518 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:28.518 "subtype": "NVMe", 00:13:28.518 "listen_addresses": [ 00:13:28.518 { 00:13:28.518 "trtype": "RDMA", 00:13:28.518 "adrfam": "IPv4", 00:13:28.518 "traddr": "192.168.100.8", 00:13:28.518 "trsvcid": "4420" 00:13:28.518 } 00:13:28.518 ], 00:13:28.518 "allow_any_host": true, 00:13:28.518 "hosts": [], 00:13:28.518 "serial_number": "SPDK00000000000003", 00:13:28.518 "model_number": "SPDK bdev Controller", 00:13:28.518 "max_namespaces": 32, 00:13:28.518 "min_cntlid": 1, 00:13:28.518 "max_cntlid": 65519, 00:13:28.518 "namespaces": [ 00:13:28.518 { 00:13:28.518 "nsid": 1, 00:13:28.518 "bdev_name": "Null3", 00:13:28.518 "name": "Null3", 00:13:28.518 "nguid": "767AEA26579C45F596ECA155B75429B9", 00:13:28.518 "uuid": "767aea26-579c-45f5-96ec-a155b75429b9" 00:13:28.518 } 00:13:28.518 ] 00:13:28.518 }, 00:13:28.518 { 00:13:28.518 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:28.518 "subtype": "NVMe", 00:13:28.518 "listen_addresses": [ 00:13:28.518 { 00:13:28.518 
"trtype": "RDMA", 00:13:28.518 "adrfam": "IPv4", 00:13:28.518 "traddr": "192.168.100.8", 00:13:28.518 "trsvcid": "4420" 00:13:28.518 } 00:13:28.518 ], 00:13:28.518 "allow_any_host": true, 00:13:28.518 "hosts": [], 00:13:28.518 "serial_number": "SPDK00000000000004", 00:13:28.518 "model_number": "SPDK bdev Controller", 00:13:28.518 "max_namespaces": 32, 00:13:28.518 "min_cntlid": 1, 00:13:28.518 "max_cntlid": 65519, 00:13:28.518 "namespaces": [ 00:13:28.518 { 00:13:28.518 "nsid": 1, 00:13:28.518 "bdev_name": "Null4", 00:13:28.518 "name": "Null4", 00:13:28.518 "nguid": "ADBF094746174ECD836B126DEC6A5CF2", 00:13:28.518 "uuid": "adbf0947-4617-4ecd-836b-126dec6a5cf2" 00:13:28.518 } 00:13:28.518 ] 00:13:28.518 } 00:13:28.518 ] 00:13:28.518 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.518 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:28.518 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:28.518 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.518 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.518 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.518 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.518 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:28.518 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:28.519 
20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:28.519 20:34:16 
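Teardown mirrors setup; a sketch of the delete loop and the final emptiness check traced above (rpc.py assumed):

    for i in $(seq 1 4); do
        ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        ./scripts/rpc.py bdev_null_delete "Null$i"
    done
    ./scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
    # check_bdevs must end up empty: no null bdevs may survive the loop.
    ./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'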
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:28.519 20:34:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:13:28.519 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:28.519 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:28.519 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:13:28.519 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.519 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:28.519 rmmod nvme_rdma 00:13:28.519 rmmod nvme_fabrics 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1038775 ']' 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1038775 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1038775 ']' 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1038775 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1038775 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1038775' 00:13:28.797 killing process with pid 1038775 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1038775 00:13:28.797 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1038775 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:29.057 00:13:29.057 real 0m10.674s 00:13:29.057 user 0m9.154s 00:13:29.057 sys 0m7.036s 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.057 ************************************ 00:13:29.057 END TEST 
nvmf_target_discovery 00:13:29.057 ************************************ 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.057 ************************************ 00:13:29.057 START TEST nvmf_referrals 00:13:29.057 ************************************ 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:29.057 * Looking for test storage... 00:13:29.057 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:29.057 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:13:29.058 20:34:17 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 
00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:37.182 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:37.182 
20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:37.182 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:37.182 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:37.182 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:37.182 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 
00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:37.183 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:37.183 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:37.183 altname enp217s0f0np0 00:13:37.183 altname ens818f0np0 00:13:37.183 inet 192.168.100.8/24 scope global mlx_0_0 00:13:37.183 valid_lft forever preferred_lft forever 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:37.183 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:37.183 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:37.183 altname enp217s0f1np1 00:13:37.183 altname ens818f1np1 00:13:37.183 inet 192.168.100.9/24 scope global mlx_0_1 00:13:37.183 valid_lft forever preferred_lft forever 00:13:37.183 20:34:25 
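The awk/cut pipeline traced above is how the harness turns an interface name into a bare IPv4 address: field 4 of `ip -o -4 addr show` is the CIDR address, and cut strips the prefix length.

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # 192.168.100.8 on this rig
    get_ip_address mlx_0_1    # 192.168.100.9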
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:37.183 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:37.442 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:37.443 20:34:25 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:37.443 192.168.100.9' 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:37.443 192.168.100.9' 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:37.443 192.168.100.9' 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1043212 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1043212 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1043212 ']' 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:37.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.443 20:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.443 [2024-07-26 20:34:25.898379] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:13:37.443 [2024-07-26 20:34:25.898439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.443 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.443 [2024-07-26 20:34:25.984134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.702 [2024-07-26 20:34:26.026448] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.702 [2024-07-26 20:34:26.026488] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.702 [2024-07-26 20:34:26.026502] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.702 [2024-07-26 20:34:26.026514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.702 [2024-07-26 20:34:26.026525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.702 [2024-07-26 20:34:26.026579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.702 [2024-07-26 20:34:26.026678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.702 [2024-07-26 20:34:26.026703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.702 [2024-07-26 20:34:26.026706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.269 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:38.269 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:13:38.269 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:38.269 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:38.269 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.269 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.269 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:38.269 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.269 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.269 [2024-07-26 20:34:26.794331] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f3eea0/0x1f43390) succeed. 00:13:38.269 [2024-07-26 20:34:26.803554] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f404e0/0x1f84a20) succeed. 
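[Annotation] The address-extraction steps traced above (common.sh@112-113) reduce to a small helper: list the interface's IPv4 addresses in one-line form, take the address/prefix field, and strip the prefix length. A minimal standalone sketch, assuming an RDMA-capable interface name such as mlx_0_0 is already known:

    # Minimal sketch of the get_ip_address step traced above (common.sh@112-113):
    # print the first IPv4 address on an interface, with the /prefix stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # prints 192.168.100.8 on this rig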
00:13:38.528 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.529 [2024-07-26 20:34:26.926191] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.529 20:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:38.529 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.788 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:39.047 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # 
[[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:39.305 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:39.563 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:39.563 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:39.563 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:39.563 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:39.563 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:39.563 20:34:27 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:39.563 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
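[Annotation] The get_referral_ips nvme branch exercised repeatedly above (referrals.sh@26) verifies the target's referral state from the host side: it pulls the discovery log as JSON and keeps only the transport addresses of entries that are not the current discovery subsystem. A condensed sketch, assuming nvme-cli and jq are installed and the discovery service is listening on 192.168.100.8:8009 as in this run:

    # Condensed sketch of the nvme-side verification (referrals.sh@26):
    # fetch the discovery log as JSON, keep only referral target addresses.
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
                  --hostid=8013ee90-59d8-e711-906e-00163566263e \
                  -t rdma -a 192.168.100.8 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort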
00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:39.822 rmmod nvme_rdma 00:13:39.822 rmmod nvme_fabrics 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1043212 ']' 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1043212 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1043212 ']' 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1043212 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1043212 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1043212' 00:13:39.822 killing process with pid 1043212 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1043212 00:13:39.822 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1043212 00:13:40.081 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:40.081 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:40.081 00:13:40.081 real 0m11.106s 00:13:40.081 user 0m12.984s 00:13:40.081 sys 0m7.187s 00:13:40.081 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.081 20:34:28 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:40.081 ************************************ 00:13:40.081 END TEST nvmf_referrals 00:13:40.081 ************************************ 00:13:40.081 20:34:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:13:40.081 20:34:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:40.081 20:34:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.081 20:34:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.081 ************************************ 00:13:40.081 START TEST nvmf_connect_disconnect 00:13:40.081 ************************************ 00:13:40.081 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:13:40.341 * Looking for test storage... 00:13:40.341 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.341 20:34:28 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:40.341 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:40.342 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:40.342 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.342 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.342 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.342 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:40.342 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:40.342 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:13:40.342 20:34:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:48.463 20:34:36 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:48.463 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:48.463 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:48.463 20:34:36 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:48.463 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:48.463 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:48.463 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # 
ip addr show mlx_0_0 00:13:48.464 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:48.464 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:48.464 altname enp217s0f0np0 00:13:48.464 altname ens818f0np0 00:13:48.464 inet 192.168.100.8/24 scope global mlx_0_0 00:13:48.464 valid_lft forever preferred_lft forever 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:48.464 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:48.464 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:48.464 altname enp217s0f1np1 00:13:48.464 altname ens818f1np1 00:13:48.464 inet 192.168.100.9/24 scope global mlx_0_1 00:13:48.464 valid_lft forever preferred_lft forever 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:48.464 192.168.100.9' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:48.464 192.168.100.9' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:48.464 192.168.100.9' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 
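The head/tail juggling traced just above, together with the assignment that completes immediately below, is a pick-first/pick-second idiom over the newline-separated RDMA_IP_LIST. Reduced to plain shell, a minimal sketch matching the traced nvmf/common.sh@457-458 commands (variable names are the harness's own):

    # RDMA_IP_LIST is newline-separated, e.g. "192.168.100.8" then "192.168.100.9"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # first RDMA address
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # second RDMA address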
00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1047739 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1047739 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1047739 ']' 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.464 20:34:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:48.464 [2024-07-26 20:34:36.848169] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:13:48.464 [2024-07-26 20:34:36.848228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.464 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.464 [2024-07-26 20:34:36.932985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.464 [2024-07-26 20:34:36.971698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.464 [2024-07-26 20:34:36.971738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:48.464 [2024-07-26 20:34:36.971748] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.464 [2024-07-26 20:34:36.971757] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.465 [2024-07-26 20:34:36.971764] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.465 [2024-07-26 20:34:36.971821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.465 [2024-07-26 20:34:36.971844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.465 [2024-07-26 20:34:36.971955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.465 [2024-07-26 20:34:36.971956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.403 [2024-07-26 20:34:37.719124] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:13:49.403 [2024-07-26 20:34:37.740675] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x64aea0/0x64f390) succeed. 00:13:49.403 [2024-07-26 20:34:37.750136] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x64c4e0/0x690a20) succeed. 
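With the RDMA transport created and both mlx5 IB devices reported, the trace that follows stands up the test subsystem. Condensed, the RPC sequence is as below — a sketch assembled from the rpc_cmd calls in this log (rpc_cmd is the test harness's wrapper that effectively forwards to SPDK's scripts/rpc.py):

    # expose a RAM-backed namespace over NVMe/RDMA
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                   # 64 MiB bdev, 512 B blocks -> "Malloc0"
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420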
00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:49.403 [2024-07-26 20:34:37.889743] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:49.403 20:34:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:52.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.445 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) [... the same "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" message repeats for each remaining connect/disconnect iteration (num_iterations=100), with timestamps running 00:14:11.734 through 00:19:03.152 ...] 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:19:03.152 20:39:51
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:03.152 rmmod nvme_rdma 00:19:03.152 rmmod nvme_fabrics 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1047739 ']' 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1047739 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1047739 ']' 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1047739 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:19:03.152 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:03.153 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1047739 00:19:03.153 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:03.153 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:03.153 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1047739' 00:19:03.153 killing process with pid 1047739 00:19:03.153 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1047739 00:19:03.153 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1047739 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:03.413 00:19:03.413 real 5m23.127s 00:19:03.413 user 20m56.297s 00:19:03.413 sys 0m18.112s 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:19:03.413 
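Each of the 100 iterations summarized above pairs a host-side NVMe-oF attach with a teardown. A sketch of what connect_disconnect.sh drives per iteration, inferred from the traced settings (NVME_CONNECT='nvme connect -i 8', num_iterations=100) and the target listener shown earlier — the authoritative flags live in the script itself:

    # one iteration: connect over RDMA with 8 I/O queue pairs, then disconnect
    for i in $(seq 1 100); do
        nvme connect -i 8 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "disconnected 1 controller(s)"
    done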
************************************ 00:19:03.413 END TEST nvmf_connect_disconnect 00:19:03.413 ************************************ 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:03.413 ************************************ 00:19:03.413 START TEST nvmf_multitarget 00:19:03.413 ************************************ 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:19:03.413 * Looking for test storage... 00:19:03.413 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.413 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.672 20:39:51 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:19:03.672 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:03.673 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.673 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:03.673 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:03.673 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:03.673 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.673 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.673 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.673 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:03.673 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:03.673 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:19:03.673 20:39:51 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:11.796 
20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:11.796 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:11.796 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:11.796 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:11.796 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:11.797 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:11.797 20:39:59 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:11.797 20:40:00 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:11.797 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:11.797 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:11.797 altname enp217s0f0np0 00:19:11.797 altname ens818f0np0 00:19:11.797 inet 192.168.100.8/24 scope global mlx_0_0 00:19:11.797 valid_lft forever preferred_lft forever 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:11.797 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:11.797 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:11.797 altname enp217s0f1np1 00:19:11.797 altname ens818f1np1 00:19:11.797 inet 192.168.100.9/24 scope global mlx_0_1 00:19:11.797 valid_lft forever preferred_lft forever 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:19:11.797 20:40:00 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:11.797 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:11.798 192.168.100.9' 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:11.798 192.168.100.9' 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:11.798 192.168.100.9' 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1107421 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1107421 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1107421 ']' 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:11.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.798 20:40:00 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:11.798 [2024-07-26 20:40:00.253868] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:19:11.798 [2024-07-26 20:40:00.253919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.798 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.798 [2024-07-26 20:40:00.338173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.058 [2024-07-26 20:40:00.379848] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.058 [2024-07-26 20:40:00.379892] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.058 [2024-07-26 20:40:00.379906] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.058 [2024-07-26 20:40:00.379921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.058 [2024-07-26 20:40:00.379930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:12.058 [2024-07-26 20:40:00.379981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.058 [2024-07-26 20:40:00.380068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.058 [2024-07-26 20:40:00.380157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.058 [2024-07-26 20:40:00.380161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.628 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.628 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:19:12.628 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:12.628 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:12.628 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:12.628 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.628 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:12.628 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:12.628 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:19:12.887 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:19:12.887 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:19:12.887 "nvmf_tgt_1" 00:19:12.887 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:19:12.887 "nvmf_tgt_2" 00:19:12.887 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:12.887 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:19:13.147 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:19:13.147 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:19:13.147 true 00:19:13.147 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:19:13.405 true 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:13.405 rmmod nvme_rdma 00:19:13.405 rmmod nvme_fabrics 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1107421 ']' 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1107421 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1107421 ']' 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 
-- # kill -0 1107421 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1107421 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1107421' 00:19:13.405 killing process with pid 1107421 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1107421 00:19:13.405 20:40:01 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1107421 00:19:13.665 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:13.665 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:13.665 00:19:13.665 real 0m10.284s 00:19:13.665 user 0m9.984s 00:19:13.665 sys 0m6.799s 00:19:13.665 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:13.665 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:13.665 ************************************ 00:19:13.665 END TEST nvmf_multitarget 00:19:13.665 ************************************ 00:19:13.665 20:40:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:19:13.665 20:40:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:13.665 20:40:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:13.665 20:40:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.665 ************************************ 00:19:13.665 START TEST nvmf_rpc 00:19:13.665 ************************************ 00:19:13.665 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:19:13.925 * Looking for test storage... 
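The nvmf_multitarget run that just finished reduces to a short JSON-RPC cycle: create two extra targets, confirm the target count went from 1 to 3, delete them, and confirm it fell back to 1. A condensed sketch of that cycle, using the same multitarget_rpc.py helper and jq checks seen in the trace (it assumes an nvmf_tgt is already up on the default /var/tmp/spdk.sock; error handling omitted):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    # exactly one default target exists after start-up
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]
    # add two named targets, each created with -s 32 as in the run above
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
    # delete both and verify only the default target remains
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]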
00:19:13.925 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.925 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:13.926 20:40:02 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:19:13.926 20:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
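The gather_supported_nvmf_pci_devs step above builds whitelists of NVMe-oF-capable NICs by PCI vendor:device ID (the Intel e810/x722 parts and the Mellanox IDs collected into the mlx array), then resolves each match to its Linux net interface through /sys/bus/pci/devices/<pci>/net/. A rough stand-alone equivalent of that resolution step, using plain lspci in place of the script's pci_bus_cache (illustrative only, not the harness's own lookup):

    # list Mellanox (vendor 0x15b3) PCI functions and the netdevs behind them
    for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] && echo "$pci -> ${net##*/}"
        done
    done

This sysfs walk is what yields the "Found net devices under 0000:d9:00.0: mlx_0_0" lines further below.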
00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:23.919 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:23.919 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:23.919 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.920 
20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:23.920 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:23.920 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:23.920 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.920 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:23.920 altname enp217s0f0np0 00:19:23.920 altname ens818f0np0 00:19:23.920 inet 192.168.100.8/24 scope global mlx_0_0 00:19:23.920 valid_lft forever preferred_lft forever 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:23.920 20:40:10 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:23.920 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.920 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:23.920 altname enp217s0f1np1 00:19:23.920 altname ens818f1np1 00:19:23.920 inet 192.168.100.9/24 scope global mlx_0_1 00:19:23.920 valid_lft forever preferred_lft forever 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:23.920 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:23.921 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:19:23.921 20:40:10 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:23.921 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:23.921 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:23.921 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:23.921 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:23.921 20:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:23.921 192.168.100.9' 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:23.921 192.168.100.9' 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:23.921 192.168.100.9' 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1111790 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@482 -- # waitforlisten 1111790 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1111790 ']' 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:23.921 [2024-07-26 20:40:11.118101] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:19:23.921 [2024-07-26 20:40:11.118151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.921 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.921 [2024-07-26 20:40:11.203977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.921 [2024-07-26 20:40:11.244211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.921 [2024-07-26 20:40:11.244252] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.921 [2024-07-26 20:40:11.244267] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.921 [2024-07-26 20:40:11.244278] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.921 [2024-07-26 20:40:11.244288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
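The nvmfappstart/waitforlisten sequence above follows the standard SPDK bring-up pattern: fork nvmf_tgt into the background, then poll its UNIX-domain RPC socket (up to max_retries=100) until a trivial method call succeeds, proving the app is ready for the RPCs that follow. A stripped-down sketch of that loop; the stock rpc.py and its rpc_get_methods probe stand in for the fuller logic in autotest_common.sh, and the 0.1 s retry interval is illustrative:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # succeeds only once the target is listening on /var/tmp/spdk.sock
        $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    kill -0 "$nvmfpid"    # still running => started cleanly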
00:19:23.921 [2024-07-26 20:40:11.244347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.921 [2024-07-26 20:40:11.244441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.921 [2024-07-26 20:40:11.244530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.921 [2024-07-26 20:40:11.244535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:19:23.921 "tick_rate": 2500000000, 00:19:23.921 "poll_groups": [ 00:19:23.921 { 00:19:23.921 "name": "nvmf_tgt_poll_group_000", 00:19:23.921 "admin_qpairs": 0, 00:19:23.921 "io_qpairs": 0, 00:19:23.921 "current_admin_qpairs": 0, 00:19:23.921 "current_io_qpairs": 0, 00:19:23.921 "pending_bdev_io": 0, 00:19:23.921 "completed_nvme_io": 0, 00:19:23.921 "transports": [] 00:19:23.921 }, 00:19:23.921 { 00:19:23.921 "name": "nvmf_tgt_poll_group_001", 00:19:23.921 "admin_qpairs": 0, 00:19:23.921 "io_qpairs": 0, 00:19:23.921 "current_admin_qpairs": 0, 00:19:23.921 "current_io_qpairs": 0, 00:19:23.921 "pending_bdev_io": 0, 00:19:23.921 "completed_nvme_io": 0, 00:19:23.921 "transports": [] 00:19:23.921 }, 00:19:23.921 { 00:19:23.921 "name": "nvmf_tgt_poll_group_002", 00:19:23.921 "admin_qpairs": 0, 00:19:23.921 "io_qpairs": 0, 00:19:23.921 "current_admin_qpairs": 0, 00:19:23.921 "current_io_qpairs": 0, 00:19:23.921 "pending_bdev_io": 0, 00:19:23.921 "completed_nvme_io": 0, 00:19:23.921 "transports": [] 00:19:23.921 }, 00:19:23.921 { 00:19:23.921 "name": "nvmf_tgt_poll_group_003", 00:19:23.921 "admin_qpairs": 0, 00:19:23.921 "io_qpairs": 0, 00:19:23.921 "current_admin_qpairs": 0, 00:19:23.921 "current_io_qpairs": 0, 00:19:23.921 "pending_bdev_io": 0, 00:19:23.921 "completed_nvme_io": 0, 00:19:23.921 "transports": [] 00:19:23.921 } 00:19:23.921 ] 00:19:23.921 }' 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:19:23.921 20:40:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:19:23.921 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:19:23.921 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:19:23.921 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:19:23.921 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:23.921 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.921 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:23.921 [2024-07-26 20:40:12.109817] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e72f10/0x1e77400) succeed. 00:19:23.921 [2024-07-26 20:40:12.119246] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e74550/0x1eb8a90) succeed. 00:19:23.921 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.921 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:19:23.921 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.921 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:23.921 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.921 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:19:23.922 "tick_rate": 2500000000, 00:19:23.922 "poll_groups": [ 00:19:23.922 { 00:19:23.922 "name": "nvmf_tgt_poll_group_000", 00:19:23.922 "admin_qpairs": 0, 00:19:23.922 "io_qpairs": 0, 00:19:23.922 "current_admin_qpairs": 0, 00:19:23.922 "current_io_qpairs": 0, 00:19:23.922 "pending_bdev_io": 0, 00:19:23.922 "completed_nvme_io": 0, 00:19:23.922 "transports": [ 00:19:23.922 { 00:19:23.922 "trtype": "RDMA", 00:19:23.922 "pending_data_buffer": 0, 00:19:23.922 "devices": [ 00:19:23.922 { 00:19:23.922 "name": "mlx5_0", 00:19:23.922 "polls": 15443, 00:19:23.922 "idle_polls": 15443, 00:19:23.922 "completions": 0, 00:19:23.922 "requests": 0, 00:19:23.922 "request_latency": 0, 00:19:23.922 "pending_free_request": 0, 00:19:23.922 "pending_rdma_read": 0, 00:19:23.922 "pending_rdma_write": 0, 00:19:23.922 "pending_rdma_send": 0, 00:19:23.922 "total_send_wrs": 0, 00:19:23.922 "send_doorbell_updates": 0, 00:19:23.922 "total_recv_wrs": 4096, 00:19:23.922 "recv_doorbell_updates": 1 00:19:23.922 }, 00:19:23.922 { 00:19:23.922 "name": "mlx5_1", 00:19:23.922 "polls": 15443, 00:19:23.922 "idle_polls": 15443, 00:19:23.922 "completions": 0, 00:19:23.922 "requests": 0, 00:19:23.922 "request_latency": 0, 00:19:23.922 "pending_free_request": 0, 00:19:23.922 "pending_rdma_read": 0, 00:19:23.922 "pending_rdma_write": 0, 00:19:23.922 "pending_rdma_send": 0, 00:19:23.922 "total_send_wrs": 0, 00:19:23.922 "send_doorbell_updates": 0, 00:19:23.922 "total_recv_wrs": 4096, 00:19:23.922 "recv_doorbell_updates": 1 00:19:23.922 } 00:19:23.922 ] 00:19:23.922 } 00:19:23.922 ] 00:19:23.922 }, 00:19:23.922 { 00:19:23.922 "name": "nvmf_tgt_poll_group_001", 00:19:23.922 "admin_qpairs": 0, 00:19:23.922 "io_qpairs": 0, 00:19:23.922 "current_admin_qpairs": 0, 00:19:23.922 "current_io_qpairs": 0, 00:19:23.922 "pending_bdev_io": 0, 00:19:23.922 "completed_nvme_io": 0, 00:19:23.922 "transports": [ 00:19:23.922 { 00:19:23.922 "trtype": "RDMA", 00:19:23.922 "pending_data_buffer": 0, 00:19:23.922 "devices": [ 00:19:23.922 { 00:19:23.922 "name": "mlx5_0", 
00:19:23.922 "polls": 9572, 00:19:23.922 "idle_polls": 9572, 00:19:23.922 "completions": 0, 00:19:23.922 "requests": 0, 00:19:23.922 "request_latency": 0, 00:19:23.922 "pending_free_request": 0, 00:19:23.922 "pending_rdma_read": 0, 00:19:23.922 "pending_rdma_write": 0, 00:19:23.922 "pending_rdma_send": 0, 00:19:23.922 "total_send_wrs": 0, 00:19:23.922 "send_doorbell_updates": 0, 00:19:23.922 "total_recv_wrs": 4096, 00:19:23.922 "recv_doorbell_updates": 1 00:19:23.922 }, 00:19:23.922 { 00:19:23.922 "name": "mlx5_1", 00:19:23.922 "polls": 9572, 00:19:23.922 "idle_polls": 9572, 00:19:23.922 "completions": 0, 00:19:23.922 "requests": 0, 00:19:23.922 "request_latency": 0, 00:19:23.922 "pending_free_request": 0, 00:19:23.922 "pending_rdma_read": 0, 00:19:23.922 "pending_rdma_write": 0, 00:19:23.922 "pending_rdma_send": 0, 00:19:23.922 "total_send_wrs": 0, 00:19:23.922 "send_doorbell_updates": 0, 00:19:23.922 "total_recv_wrs": 4096, 00:19:23.922 "recv_doorbell_updates": 1 00:19:23.922 } 00:19:23.922 ] 00:19:23.922 } 00:19:23.922 ] 00:19:23.922 }, 00:19:23.922 { 00:19:23.922 "name": "nvmf_tgt_poll_group_002", 00:19:23.922 "admin_qpairs": 0, 00:19:23.922 "io_qpairs": 0, 00:19:23.922 "current_admin_qpairs": 0, 00:19:23.922 "current_io_qpairs": 0, 00:19:23.922 "pending_bdev_io": 0, 00:19:23.922 "completed_nvme_io": 0, 00:19:23.922 "transports": [ 00:19:23.922 { 00:19:23.922 "trtype": "RDMA", 00:19:23.922 "pending_data_buffer": 0, 00:19:23.922 "devices": [ 00:19:23.922 { 00:19:23.922 "name": "mlx5_0", 00:19:23.922 "polls": 5351, 00:19:23.922 "idle_polls": 5351, 00:19:23.922 "completions": 0, 00:19:23.922 "requests": 0, 00:19:23.922 "request_latency": 0, 00:19:23.922 "pending_free_request": 0, 00:19:23.922 "pending_rdma_read": 0, 00:19:23.922 "pending_rdma_write": 0, 00:19:23.922 "pending_rdma_send": 0, 00:19:23.922 "total_send_wrs": 0, 00:19:23.922 "send_doorbell_updates": 0, 00:19:23.922 "total_recv_wrs": 4096, 00:19:23.922 "recv_doorbell_updates": 1 00:19:23.922 }, 00:19:23.922 { 00:19:23.922 "name": "mlx5_1", 00:19:23.922 "polls": 5351, 00:19:23.922 "idle_polls": 5351, 00:19:23.922 "completions": 0, 00:19:23.922 "requests": 0, 00:19:23.922 "request_latency": 0, 00:19:23.922 "pending_free_request": 0, 00:19:23.922 "pending_rdma_read": 0, 00:19:23.922 "pending_rdma_write": 0, 00:19:23.922 "pending_rdma_send": 0, 00:19:23.922 "total_send_wrs": 0, 00:19:23.922 "send_doorbell_updates": 0, 00:19:23.922 "total_recv_wrs": 4096, 00:19:23.922 "recv_doorbell_updates": 1 00:19:23.922 } 00:19:23.922 ] 00:19:23.922 } 00:19:23.922 ] 00:19:23.922 }, 00:19:23.922 { 00:19:23.922 "name": "nvmf_tgt_poll_group_003", 00:19:23.922 "admin_qpairs": 0, 00:19:23.922 "io_qpairs": 0, 00:19:23.922 "current_admin_qpairs": 0, 00:19:23.922 "current_io_qpairs": 0, 00:19:23.922 "pending_bdev_io": 0, 00:19:23.922 "completed_nvme_io": 0, 00:19:23.922 "transports": [ 00:19:23.922 { 00:19:23.922 "trtype": "RDMA", 00:19:23.922 "pending_data_buffer": 0, 00:19:23.922 "devices": [ 00:19:23.922 { 00:19:23.922 "name": "mlx5_0", 00:19:23.922 "polls": 880, 00:19:23.922 "idle_polls": 880, 00:19:23.922 "completions": 0, 00:19:23.922 "requests": 0, 00:19:23.922 "request_latency": 0, 00:19:23.922 "pending_free_request": 0, 00:19:23.922 "pending_rdma_read": 0, 00:19:23.922 "pending_rdma_write": 0, 00:19:23.922 "pending_rdma_send": 0, 00:19:23.922 "total_send_wrs": 0, 00:19:23.922 "send_doorbell_updates": 0, 00:19:23.922 "total_recv_wrs": 4096, 00:19:23.922 "recv_doorbell_updates": 1 00:19:23.922 }, 00:19:23.922 { 00:19:23.922 "name": "mlx5_1", 
00:19:23.922 "polls": 880, 00:19:23.922 "idle_polls": 880, 00:19:23.922 "completions": 0, 00:19:23.922 "requests": 0, 00:19:23.922 "request_latency": 0, 00:19:23.922 "pending_free_request": 0, 00:19:23.922 "pending_rdma_read": 0, 00:19:23.922 "pending_rdma_write": 0, 00:19:23.922 "pending_rdma_send": 0, 00:19:23.922 "total_send_wrs": 0, 00:19:23.922 "send_doorbell_updates": 0, 00:19:23.922 "total_recv_wrs": 4096, 00:19:23.922 "recv_doorbell_updates": 1 00:19:23.922 } 00:19:23.922 ] 00:19:23.922 } 00:19:23.922 ] 00:19:23.922 } 00:19:23.922 ] 00:19:23.922 }' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:19:23.922 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:19:23.923 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:19:23.923 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:19:24.182 20:40:12 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.182 Malloc1 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:19:24.182 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.183 [2024-07-26 20:40:12.534679] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:19:24.183 20:40:12 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:19:24.183 [2024-07-26 20:40:12.586759] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:19:24.183 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:24.183 could not add new controller: failed to write to nvme-fabrics device 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.183 20:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:25.117 20:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:19:25.117 20:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:25.117 20:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:25.117 20:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:25.117 20:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:27.693 20:40:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:27.693 20:40:15 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:27.693 20:40:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:27.693 20:40:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:27.693 20:40:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:27.693 20:40:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:27.693 20:40:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:28.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:28.262 [2024-07-26 20:40:16.688607] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:19:28.262 Failed to write to /dev/nvme-fabrics: Input/output error 00:19:28.262 could not add new controller: failed to write to nvme-fabrics device 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.262 20:40:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:29.200 20:40:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:19:29.200 20:40:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:29.200 20:40:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:29.200 20:40:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:29.200 20:40:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:31.737 20:40:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:31.737 20:40:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:31.737 20:40:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:31.737 20:40:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:31.737 20:40:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:31.737 20:40:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:31.737 20:40:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
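[Annotation] Both failing connects above run under the suite's NOT/valid_exec_arg negative-assertion wrapper: execute a command that is expected to fail and invert its status (the real helper additionally screens exit codes above 128, i.e. signal deaths, which is what the "(( es > 128 ))" lines in the trace are doing). A simplified sketch of the idiom:

    # Simplified sketch of the NOT helper from autotest_common.sh
    NOT() {
        if "$@"; then
            return 1   # unexpected success: the negative assertion fails
        fi
        return 0       # command failed, as the test expects
    }
    NOT nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420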
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:32.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.307 [2024-07-26 20:40:20.749657] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.307 20:40:20 
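[Annotation] From rpc.sh line 81 the test enters a five-iteration loop; each pass below rebuilds the subsystem from scratch, attaches Malloc1 at the fixed NSID 5, connects a host, then tears everything down. One iteration, condensed into rpc.py form (a sketch; serial polling is omitted here, see the helper sketches further down):

    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # pin NSID 5
        rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
        # (waitforserial / waitforserial_disconnect polling omitted)
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done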
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.307 20:40:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:33.246 20:40:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:33.246 20:40:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:33.246 20:40:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:33.246 20:40:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:33.246 20:40:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:35.783 20:40:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:35.783 20:40:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:35.783 20:40:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:35.783 20:40:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:35.783 20:40:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:35.783 20:40:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:35.783 20:40:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:36.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.351 [2024-07-26 20:40:24.774524] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.351 20:40:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:37.287 20:40:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:37.287 20:40:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:37.287 20:40:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:37.287 20:40:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:37.287 20:40:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:39.824 20:40:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:39.824 20:40:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:39.824 
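[Annotation] The waitforserial trace that recurs after every connect is a plain poll: sleep, then count the block devices whose SERIAL column matches the subsystem serial, retrying up to 16 times per the "(( i++ <= 15 ))" guard. A minimal sketch of the idiom (the suite's actual helper lives in autotest_common.sh):

    wait_for_serial() {   # sketch of the waitforserial idiom
        local serial=$1 expected=${2:-1} i=0
        while (( i++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )) && return 0
        done
        return 1
    }
    wait_for_serial SPDKISFASTANDAWESOME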
20:40:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:39.824 20:40:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:39.824 20:40:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:39.824 20:40:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:39.824 20:40:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:40.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.392 [2024-07-26 20:40:28.802106] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:40.392 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.393 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.393 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.393 20:40:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:41.330 20:40:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:41.330 20:40:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:41.330 20:40:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:41.330 20:40:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:41.330 20:40:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:43.862 20:40:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:43.862 20:40:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:43.863 20:40:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:43.863 20:40:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:43.863 20:40:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:43.863 20:40:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:43.863 20:40:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:44.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:44.431 20:40:32 
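[Annotation] The disconnect-side helper being traced around this point, waitforserial_disconnect, is the mirror image: after nvme disconnect it polls until the serial no longer appears in lsblk output (grep -q -w, so a whole-word match). Simplified sketch under that reading of the trace:

    wait_for_serial_gone() {   # sketch of the waitforserial_disconnect idiom
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }
    wait_for_serial_gone SPDKISFASTANDAWESOME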
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.431 [2024-07-26 20:40:32.841599] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.431 20:40:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:45.365 20:40:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:45.365 20:40:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:45.365 20:40:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:45.365 20:40:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:45.365 20:40:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:47.930 20:40:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:47.930 20:40:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:47.930 20:40:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:47.930 20:40:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:47.930 20:40:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:47.930 20:40:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:47.930 20:40:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:48.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:48.498 20:40:36 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:48.498 [2024-07-26 20:40:36.889324] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.498 20:40:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:49.433 20:40:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:49.433 20:40:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:49.434 20:40:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:49.434 20:40:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:49.434 20:40:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:51.338 20:40:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:51.338 20:40:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:51.338 20:40:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:51.597 20:40:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:51.597 20:40:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:19:51.597 20:40:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:51.597 20:40:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:52.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 [2024-07-26 20:40:40.932479] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 [2024-07-26 20:40:40.980614] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:40 
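[Annotation] The second five-iteration loop running here (rpc.sh lines 99-107) drops the host connect entirely and instead exercises namespace lifetime with an auto-assigned NSID: add_ns is called without -n, so the target picks the first free NSID, which the matching "remove_ns ... 1" shows to be 1. Condensed sketch of one pass:

    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n: NSID auto-assigned
        rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # first free NSID was 1
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done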
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.536 20:40:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.536 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:52.536 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:52.536 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.536 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.536 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.536 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:52.536 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.537 [2024-07-26 20:40:41.032833] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.537 [2024-07-26 20:40:41.080987] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.537 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.797 [2024-07-26 20:40:41.129164] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.797 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.798 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.798 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.798 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.798 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.798 20:40:41 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.798 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.798 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:19:52.798 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.798 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.798 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.798 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:19:52.798 "tick_rate": 2500000000, 00:19:52.798 "poll_groups": [ 00:19:52.798 { 00:19:52.798 "name": "nvmf_tgt_poll_group_000", 00:19:52.798 "admin_qpairs": 2, 00:19:52.798 "io_qpairs": 27, 00:19:52.798 "current_admin_qpairs": 0, 00:19:52.798 "current_io_qpairs": 0, 00:19:52.798 "pending_bdev_io": 0, 00:19:52.798 "completed_nvme_io": 127, 00:19:52.798 "transports": [ 00:19:52.798 { 00:19:52.798 "trtype": "RDMA", 00:19:52.798 "pending_data_buffer": 0, 00:19:52.798 "devices": [ 00:19:52.798 { 00:19:52.798 "name": "mlx5_0", 00:19:52.798 "polls": 3515387, 00:19:52.798 "idle_polls": 3515065, 00:19:52.798 "completions": 361, 00:19:52.798 "requests": 180, 00:19:52.798 "request_latency": 35424958, 00:19:52.798 "pending_free_request": 0, 00:19:52.798 "pending_rdma_read": 0, 00:19:52.798 "pending_rdma_write": 0, 00:19:52.798 "pending_rdma_send": 0, 00:19:52.798 "total_send_wrs": 305, 00:19:52.798 "send_doorbell_updates": 159, 00:19:52.798 "total_recv_wrs": 4276, 00:19:52.798 "recv_doorbell_updates": 159 00:19:52.798 }, 00:19:52.798 { 00:19:52.798 "name": "mlx5_1", 00:19:52.798 "polls": 3515387, 00:19:52.798 "idle_polls": 3515387, 00:19:52.798 "completions": 0, 00:19:52.798 "requests": 0, 00:19:52.798 "request_latency": 0, 00:19:52.798 "pending_free_request": 0, 00:19:52.798 "pending_rdma_read": 0, 00:19:52.798 "pending_rdma_write": 0, 00:19:52.798 "pending_rdma_send": 0, 00:19:52.798 "total_send_wrs": 0, 00:19:52.798 "send_doorbell_updates": 0, 00:19:52.798 "total_recv_wrs": 4096, 00:19:52.798 "recv_doorbell_updates": 1 00:19:52.798 } 00:19:52.798 ] 00:19:52.798 } 00:19:52.798 ] 00:19:52.798 }, 00:19:52.798 { 00:19:52.798 "name": "nvmf_tgt_poll_group_001", 00:19:52.798 "admin_qpairs": 2, 00:19:52.798 "io_qpairs": 26, 00:19:52.798 "current_admin_qpairs": 0, 00:19:52.798 "current_io_qpairs": 0, 00:19:52.798 "pending_bdev_io": 0, 00:19:52.798 "completed_nvme_io": 126, 00:19:52.798 "transports": [ 00:19:52.798 { 00:19:52.798 "trtype": "RDMA", 00:19:52.798 "pending_data_buffer": 0, 00:19:52.798 "devices": [ 00:19:52.798 { 00:19:52.798 "name": "mlx5_0", 00:19:52.798 "polls": 3436268, 00:19:52.798 "idle_polls": 3435949, 00:19:52.798 "completions": 358, 00:19:52.798 "requests": 179, 00:19:52.798 "request_latency": 37029482, 00:19:52.798 "pending_free_request": 0, 00:19:52.798 "pending_rdma_read": 0, 00:19:52.798 "pending_rdma_write": 0, 00:19:52.798 "pending_rdma_send": 0, 00:19:52.798 "total_send_wrs": 304, 00:19:52.798 "send_doorbell_updates": 154, 00:19:52.798 "total_recv_wrs": 4275, 00:19:52.798 "recv_doorbell_updates": 155 00:19:52.798 }, 00:19:52.798 { 00:19:52.798 "name": "mlx5_1", 00:19:52.798 "polls": 3436268, 00:19:52.798 "idle_polls": 3436268, 00:19:52.798 "completions": 0, 00:19:52.798 "requests": 0, 00:19:52.798 "request_latency": 0, 00:19:52.798 "pending_free_request": 0, 00:19:52.798 
"pending_rdma_read": 0, 00:19:52.798 "pending_rdma_write": 0, 00:19:52.798 "pending_rdma_send": 0, 00:19:52.798 "total_send_wrs": 0, 00:19:52.798 "send_doorbell_updates": 0, 00:19:52.798 "total_recv_wrs": 4096, 00:19:52.798 "recv_doorbell_updates": 1 00:19:52.798 } 00:19:52.798 ] 00:19:52.798 } 00:19:52.798 ] 00:19:52.798 }, 00:19:52.798 { 00:19:52.798 "name": "nvmf_tgt_poll_group_002", 00:19:52.798 "admin_qpairs": 1, 00:19:52.798 "io_qpairs": 26, 00:19:52.798 "current_admin_qpairs": 0, 00:19:52.798 "current_io_qpairs": 0, 00:19:52.798 "pending_bdev_io": 0, 00:19:52.798 "completed_nvme_io": 77, 00:19:52.798 "transports": [ 00:19:52.798 { 00:19:52.798 "trtype": "RDMA", 00:19:52.798 "pending_data_buffer": 0, 00:19:52.798 "devices": [ 00:19:52.798 { 00:19:52.798 "name": "mlx5_0", 00:19:52.798 "polls": 3531496, 00:19:52.798 "idle_polls": 3531306, 00:19:52.798 "completions": 209, 00:19:52.798 "requests": 104, 00:19:52.798 "request_latency": 20190268, 00:19:52.798 "pending_free_request": 0, 00:19:52.798 "pending_rdma_read": 0, 00:19:52.798 "pending_rdma_write": 0, 00:19:52.798 "pending_rdma_send": 0, 00:19:52.798 "total_send_wrs": 168, 00:19:52.798 "send_doorbell_updates": 93, 00:19:52.798 "total_recv_wrs": 4200, 00:19:52.798 "recv_doorbell_updates": 93 00:19:52.798 }, 00:19:52.798 { 00:19:52.798 "name": "mlx5_1", 00:19:52.798 "polls": 3531496, 00:19:52.798 "idle_polls": 3531496, 00:19:52.798 "completions": 0, 00:19:52.798 "requests": 0, 00:19:52.798 "request_latency": 0, 00:19:52.798 "pending_free_request": 0, 00:19:52.798 "pending_rdma_read": 0, 00:19:52.798 "pending_rdma_write": 0, 00:19:52.798 "pending_rdma_send": 0, 00:19:52.798 "total_send_wrs": 0, 00:19:52.798 "send_doorbell_updates": 0, 00:19:52.798 "total_recv_wrs": 4096, 00:19:52.798 "recv_doorbell_updates": 1 00:19:52.798 } 00:19:52.798 ] 00:19:52.798 } 00:19:52.798 ] 00:19:52.798 }, 00:19:52.798 { 00:19:52.798 "name": "nvmf_tgt_poll_group_003", 00:19:52.798 "admin_qpairs": 2, 00:19:52.798 "io_qpairs": 26, 00:19:52.798 "current_admin_qpairs": 0, 00:19:52.798 "current_io_qpairs": 0, 00:19:52.798 "pending_bdev_io": 0, 00:19:52.798 "completed_nvme_io": 125, 00:19:52.798 "transports": [ 00:19:52.798 { 00:19:52.798 "trtype": "RDMA", 00:19:52.798 "pending_data_buffer": 0, 00:19:52.798 "devices": [ 00:19:52.798 { 00:19:52.798 "name": "mlx5_0", 00:19:52.798 "polls": 2757114, 00:19:52.798 "idle_polls": 2756797, 00:19:52.798 "completions": 360, 00:19:52.798 "requests": 180, 00:19:52.798 "request_latency": 37649226, 00:19:52.798 "pending_free_request": 0, 00:19:52.798 "pending_rdma_read": 0, 00:19:52.798 "pending_rdma_write": 0, 00:19:52.798 "pending_rdma_send": 0, 00:19:52.798 "total_send_wrs": 305, 00:19:52.798 "send_doorbell_updates": 155, 00:19:52.798 "total_recv_wrs": 4276, 00:19:52.798 "recv_doorbell_updates": 156 00:19:52.798 }, 00:19:52.798 { 00:19:52.798 "name": "mlx5_1", 00:19:52.798 "polls": 2757114, 00:19:52.798 "idle_polls": 2757114, 00:19:52.798 "completions": 0, 00:19:52.798 "requests": 0, 00:19:52.799 "request_latency": 0, 00:19:52.799 "pending_free_request": 0, 00:19:52.799 "pending_rdma_read": 0, 00:19:52.799 "pending_rdma_write": 0, 00:19:52.799 "pending_rdma_send": 0, 00:19:52.799 "total_send_wrs": 0, 00:19:52.799 "send_doorbell_updates": 0, 00:19:52.799 "total_recv_wrs": 4096, 00:19:52.799 "recv_doorbell_updates": 1 00:19:52.799 } 00:19:52.799 ] 00:19:52.799 } 00:19:52.799 ] 00:19:52.799 } 00:19:52.799 ] 00:19:52.799 }' 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:19:52.799 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1288 > 0 )) 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 130293934 > 0 )) 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:53.059 rmmod nvme_rdma 00:19:53.059 rmmod nvme_fabrics 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.059 
20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1111790 ']' 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1111790 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1111790 ']' 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1111790 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1111790 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1111790' 00:19:53.059 killing process with pid 1111790 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1111790 00:19:53.059 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1111790 00:19:53.319 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.319 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:53.319 00:19:53.319 real 0m39.584s 00:19:53.319 user 2m4.409s 00:19:53.319 sys 0m8.417s 00:19:53.319 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:53.319 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.319 ************************************ 00:19:53.319 END TEST nvmf_rpc 00:19:53.319 ************************************ 00:19:53.319 20:40:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:19:53.319 20:40:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:53.319 20:40:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:53.319 20:40:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:53.579 ************************************ 00:19:53.579 START TEST nvmf_invalid 00:19:53.579 ************************************ 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:19:53.579 * Looking for test storage... 
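Before moving on: the nvmf_rpc assertions above ((( 7 > 0 )), (( 105 > 0 )), (( 1288 > 0 )), (( 130293934 > 0 ))) are all produced by the same jsum helper traced at rpc.sh@19-20: jq pulls one numeric field out of every poll group and awk sums the column. A condensed sketch of that helper, reconstructed from the trace; how the captured $stats JSON is fed into jq is not visible in this log, so piping rpc_cmd directly is an assumption:

  jsum() {
      local filter=$1
      # sum one numeric field across all poll groups in the nvmf_get_stats dump
      rpc_cmd nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
  }

Against the stats dump above, jsum '.poll_groups[].io_qpairs' yields 27+26+26+26 = 105, which is exactly what the (( 105 > 0 )) check consumes.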
00:19:53.579 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.579 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.580 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.580 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.580 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:53.580 20:40:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.580 20:40:42 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:19:53.580 20:40:42 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:01.700 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:01.700 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:01.700 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:01.701 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:01.701 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:01.701 20:40:50 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:20:01.701 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:01.960 
20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:01.960 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:01.960 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:01.960 altname enp217s0f0np0 00:20:01.960 altname ens818f0np0 00:20:01.960 inet 192.168.100.8/24 scope global mlx_0_0 00:20:01.960 valid_lft forever preferred_lft forever 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:01.960 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:01.960 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:01.960 altname enp217s0f1np1 00:20:01.960 altname ens818f1np1 00:20:01.960 inet 192.168.100.9/24 scope global mlx_0_1 00:20:01.960 valid_lft forever preferred_lft forever 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:01.960 192.168.100.9' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:01.960 192.168.100.9' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:01.960 192.168.100.9' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:20:01.960 20:40:50 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1121159 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1121159 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1121159 ']' 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.960 20:40:50 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:01.960 [2024-07-26 20:40:50.454873] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:20:01.960 [2024-07-26 20:40:50.454923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.960 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.218 [2024-07-26 20:40:50.542325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.218 [2024-07-26 20:40:50.581615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.218 [2024-07-26 20:40:50.581661] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.218 [2024-07-26 20:40:50.581675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.218 [2024-07-26 20:40:50.581685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
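The nvmfappstart step above comes down to launching the target and blocking until its JSON-RPC socket answers; nvmfpid=1121159 and the nvmf_tgt invocation are visible in the trace, but the body of waitforlisten is not, so the readiness probe below is an assumption (rpc_get_methods is just a cheap RPC to poll with):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll /var/tmp/spdk.sock until the target accepts RPCs (probe is an assumption)
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
      sleep 0.5
  done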
00:20:02.218 [2024-07-26 20:40:50.581694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.218 [2024-07-26 20:40:50.581753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.218 [2024-07-26 20:40:50.581849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.218 [2024-07-26 20:40:50.581933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.219 [2024-07-26 20:40:50.581937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.784 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.784 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:20:02.784 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:02.784 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:02.785 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:02.785 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.785 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:02.785 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9211 00:20:03.042 [2024-07-26 20:40:51.475588] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:20:03.042 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:20:03.042 { 00:20:03.042 "nqn": "nqn.2016-06.io.spdk:cnode9211", 00:20:03.042 "tgt_name": "foobar", 00:20:03.042 "method": "nvmf_create_subsystem", 00:20:03.042 "req_id": 1 00:20:03.042 } 00:20:03.042 Got JSON-RPC error response 00:20:03.042 response: 00:20:03.042 { 00:20:03.042 "code": -32603, 00:20:03.042 "message": "Unable to find target foobar" 00:20:03.042 }' 00:20:03.042 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:20:03.042 { 00:20:03.042 "nqn": "nqn.2016-06.io.spdk:cnode9211", 00:20:03.042 "tgt_name": "foobar", 00:20:03.042 "method": "nvmf_create_subsystem", 00:20:03.042 "req_id": 1 00:20:03.042 } 00:20:03.042 Got JSON-RPC error response 00:20:03.042 response: 00:20:03.042 { 00:20:03.042 "code": -32603, 00:20:03.042 "message": "Unable to find target foobar" 00:20:03.043 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:20:03.043 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:20:03.043 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8597 00:20:03.301 [2024-07-26 20:40:51.664259] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8597: invalid serial number 'SPDKISFASTANDAWESOME' 00:20:03.301 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:20:03.301 { 00:20:03.301 "nqn": "nqn.2016-06.io.spdk:cnode8597", 00:20:03.301 "serial_number": 
"SPDKISFASTANDAWESOME\u001f", 00:20:03.301 "method": "nvmf_create_subsystem", 00:20:03.301 "req_id": 1 00:20:03.301 } 00:20:03.301 Got JSON-RPC error response 00:20:03.301 response: 00:20:03.301 { 00:20:03.301 "code": -32602, 00:20:03.301 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:20:03.301 }' 00:20:03.301 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:20:03.301 { 00:20:03.301 "nqn": "nqn.2016-06.io.spdk:cnode8597", 00:20:03.301 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:20:03.301 "method": "nvmf_create_subsystem", 00:20:03.301 "req_id": 1 00:20:03.301 } 00:20:03.301 Got JSON-RPC error response 00:20:03.301 response: 00:20:03.301 { 00:20:03.301 "code": -32602, 00:20:03.301 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:20:03.301 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:20:03.301 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:20:03.301 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15780 00:20:03.301 [2024-07-26 20:40:51.852831] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15780: invalid model number 'SPDK_Controller' 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:20:03.560 { 00:20:03.560 "nqn": "nqn.2016-06.io.spdk:cnode15780", 00:20:03.560 "model_number": "SPDK_Controller\u001f", 00:20:03.560 "method": "nvmf_create_subsystem", 00:20:03.560 "req_id": 1 00:20:03.560 } 00:20:03.560 Got JSON-RPC error response 00:20:03.560 response: 00:20:03.560 { 00:20:03.560 "code": -32602, 00:20:03.560 "message": "Invalid MN SPDK_Controller\u001f" 00:20:03.560 }' 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:20:03.560 { 00:20:03.560 "nqn": "nqn.2016-06.io.spdk:cnode15780", 00:20:03.560 "model_number": "SPDK_Controller\u001f", 00:20:03.560 "method": "nvmf_create_subsystem", 00:20:03.560 "req_id": 1 00:20:03.560 } 00:20:03.560 Got JSON-RPC error response 00:20:03.560 response: 00:20:03.560 { 00:20:03.560 "code": -32602, 00:20:03.560 "message": "Invalid MN SPDK_Controller\u001f" 00:20:03.560 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:20:03.560 20:40:51 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=T 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.560 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
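The long run of printf %x / echo -e / string+= steps around this point is gen_random_s assembling a 21-character serial number one random character at a time, drawn from the chars array of ASCII codes 32-127 (RANDOM=0 at invalid.sh@16 makes the sequence reproducible). A condensed sketch under that reading; the exact handling of shell specials in target/invalid.sh may differ:

  gen_random_s() {
      local length=$1 ll hex string=
      local chars=({32..127})                     # ASCII codes, as in the chars array above
      for (( ll = 0; ll < length; ll++ )); do
          printf -v hex '%x' "${chars[RANDOM % ${#chars[@]}]}"   # pick a random code
          string+=$(echo -e "\x$hex")             # append that one character
      done
      echo "$string"
  }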
00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:20:03.561 20:40:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x46' 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\''l|'\''\ET>+@q.s{nNL$F*1' 00:20:03.561 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ''\''l|'\''\ET>+@q.s{nNL$F*1' nqn.2016-06.io.spdk:cnode2995 00:20:03.820 [2024-07-26 20:40:52.206027] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2995: invalid serial number ''l|'\ET>+@q.s{nNL$F*1' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:20:03.820 { 00:20:03.820 "nqn": "nqn.2016-06.io.spdk:cnode2995", 00:20:03.820 "serial_number": "'\''l|'\''\\ET>+@q.s{nNL$F*1", 00:20:03.820 "method": "nvmf_create_subsystem", 00:20:03.820 "req_id": 1 00:20:03.820 } 00:20:03.820 Got JSON-RPC error response 00:20:03.820 response: 00:20:03.820 { 00:20:03.820 "code": -32602, 00:20:03.820 "message": "Invalid SN '\''l|'\''\\ET>+@q.s{nNL$F*1" 00:20:03.820 }' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:20:03.820 { 00:20:03.820 "nqn": "nqn.2016-06.io.spdk:cnode2995", 00:20:03.820 "serial_number": "'l|'\\ET>+@q.s{nNL$F*1", 00:20:03.820 "method": "nvmf_create_subsystem", 00:20:03.820 "req_id": 1 00:20:03.820 } 00:20:03.820 Got JSON-RPC error response 00:20:03.820 response: 00:20:03.820 { 00:20:03.820 "code": -32602, 00:20:03.820 "message": "Invalid SN 'l|'\\ET>+@q.s{nNL$F*1" 00:20:03.820 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' 
'58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 
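The loop traced above is target/invalid.sh's gen_random_s helper: for each requested character it picks a code point from the chars array, prints it as hex with printf %x, expands it with echo -e '\xHH', and appends the result to string. A minimal sketch of that pattern, assuming bash and the 32-127 code-point range shown in the traced array (the real script's selection and quoting details may differ):

    gen_random_s() {
        local length=$1 ll
        local chars=({32..127})   # ASCII code points 32-127, matching the traced array
        local string=
        for ((ll = 0; ll < length; ll++)); do
            # random code point -> hex -> character, appended one at a time
            string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

Each printf %x / echo -e / string+= triple in the trace is one iteration of this loop; the 41 in gen_random_s 41 is the length argument.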
00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:20:03.820 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.821 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:03.821 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:20:03.821 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:20:03.821 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:20:03.821 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:03.821 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:20:04.079 20:40:52 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.079 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x44' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:20:04.080 20:40:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'fwiR>kkkkk /dev/null' 00:20:06.663 20:40:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.663 20:40:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:06.663 20:40:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:06.663 20:40:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:20:06.663 20:40:55 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@298 -- # mlx=() 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:14.812 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:14.812 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:14.812 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:14.812 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:20:14.812 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:14.813 
20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:14.813 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:14.813 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:14.813 altname enp217s0f0np0 00:20:14.813 altname ens818f0np0 00:20:14.813 inet 192.168.100.8/24 scope global mlx_0_0 00:20:14.813 valid_lft forever preferred_lft forever 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:14.813 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:14.813 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:14.813 altname enp217s0f1np1 00:20:14.813 altname ens818f1np1 00:20:14.813 inet 192.168.100.9/24 scope global mlx_0_1 00:20:14.813 valid_lft forever preferred_lft forever 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:14.813 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:15.071 20:41:03 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:15.071 192.168.100.9' 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:15.071 192.168.100.9' 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:15.071 192.168.100.9' 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:15.071 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:15.072 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1126142 00:20:15.072 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1126142 00:20:15.072 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:15.072 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1126142 ']' 00:20:15.072 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.072 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:15.072 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.072 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:15.072 20:41:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:15.072 [2024-07-26 20:41:03.493562] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:20:15.072 [2024-07-26 20:41:03.493612] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.072 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.072 [2024-07-26 20:41:03.578054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:15.072 [2024-07-26 20:41:03.617116] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.072 [2024-07-26 20:41:03.617159] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.072 [2024-07-26 20:41:03.617169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.072 [2024-07-26 20:41:03.617177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.072 [2024-07-26 20:41:03.617185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.072 [2024-07-26 20:41:03.617292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.072 [2024-07-26 20:41:03.617381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:15.072 [2024-07-26 20:41:03.617383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.007 [2024-07-26 20:41:04.361785] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa3d520/0xa41a10) succeed. 00:20:16.007 [2024-07-26 20:41:04.371056] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa3eac0/0xa830a0) succeed. 
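The address discovery traced above (nvmf/common.sh@112-113) reduces to one pipeline per RDMA interface: take column 4 of ip -o -4 addr show (ADDR/PREFIX) and strip the prefix length. A sketch of that helper, with the name and pipeline taken from the trace and everything else assumed:

    get_ip_address() {
        local interface=$1
        # $4 of `ip -o -4 addr show` is ADDR/PREFIX; cut drops the /PREFIX part
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1   # -> 192.168.100.9

The two results feed RDMA_IP_LIST, from which head -n 1 and tail -n +2 split out NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP as seen in the trace.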
00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.007 [2024-07-26 20:41:04.490581] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.007 NULL1 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1126319 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.007 20:41:04 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.007 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.007 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.266 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.266 20:41:04 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.267 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.267 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:16.267 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:16.267 20:41:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
[repetitive liveness-poll trace condensed: from 00:20:16.267 (20:41:04) to 00:20:25.620 (20:41:14) the same four records repeat roughly every 0.25-0.6 s -- target/connect_stress.sh@34 `kill -0 1126319`, target/connect_stress.sh@35 `rpc_cmd`, common/autotest_common.sh@561 `xtrace_disable`, common/autotest_common.sh@10 `set +x` -- each round closing with common/autotest_common.sh@589 `[[ 0 == 0 ]]`; only the timestamps differ between iterations]
00:20:25.620 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.620 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1126319 00:20:25.620 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:25.621 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.621 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:25.879 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.879 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1126319 00:20:25.879 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:25.879 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.879 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:26.445 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.445 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1126319 00:20:26.445 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:26.445 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.445 20:41:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:26.445 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1126319 00:20:26.704 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1126319) - No such process 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1126319 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:26.704 rmmod 
nvme_rdma 00:20:26.704 rmmod nvme_fabrics 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1126142 ']' 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1126142 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1126142 ']' 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1126142 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1126142 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1126142' 00:20:26.704 killing process with pid 1126142 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1126142 00:20:26.704 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1126142 00:20:26.963 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:26.963 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:26.963 00:20:26.963 real 0m20.467s 00:20:26.963 user 0m43.154s 00:20:26.963 sys 0m8.937s 00:20:26.963 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:26.963 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:26.963 ************************************ 00:20:26.963 END TEST nvmf_connect_stress 00:20:26.963 ************************************ 00:20:26.963 20:41:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:20:26.963 20:41:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:26.963 20:41:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:26.963 20:41:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:26.963 ************************************ 00:20:26.963 START TEST nvmf_fused_ordering 00:20:26.963 ************************************ 00:20:26.963 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:20:27.284 * Looking for test 
storage... 00:20:27.284 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicate copies of these three toolchain directories, accumulated by repeated sourcing of export.sh, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same value with /opt/go prepended; duplicates elided] 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same value with /opt/protoc prepended; duplicates elided] 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo [the exported PATH; duplicates elided] 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering --
nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:20:27.284 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:27.285 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.285 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:27.285 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:27.285 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:27.285 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.285 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.285 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.285 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:27.285 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:27.285 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:20:27.285 20:41:15 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 
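The records above and below are nvmf/common.sh gathering NVMe-oF-capable NICs: it declares per-family PCI ID arrays (e810, x722, mlx) under vendor IDs 0x8086/0x15b3, fills them, and then walks the PCI bus. A minimal standalone sketch of that discovery idea (hypothetical helper, not the in-tree common.sh implementation; device-ID list copied from the trace):

    #!/usr/bin/env bash
    # Scan sysfs for Mellanox mlx5 NICs by PCI vendor/device ID, as the
    # gather_supported_nvmf_pci_devs trace does with its mlx array.
    mellanox=0x15b3
    mlx_ids=(0x1021 0xa2dc 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)
    for dev in /sys/bus/pci/devices/*; do
        [[ $(cat "$dev/vendor") == "$mellanox" ]] || continue
        for id in "${mlx_ids[@]}"; do
            if [[ $(cat "$dev/device") == "$id" ]]; then
                echo "Found ${dev##*/} ($mellanox - $id)"
            fi
        done
    done

On this rig the real scan reports two 0x1015 ports at 0000:d9:00.0 and 0000:d9:00.1, as the "Found" lines below show.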
00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.417 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:35.418 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:35.418 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:35.418 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:35.418 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ 
rdma == tcp ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:35.418 20:41:23 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:35.418 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:35.418 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:35.418 altname enp217s0f0np0 00:20:35.418 altname ens818f0np0 00:20:35.418 inet 192.168.100.8/24 scope global mlx_0_0 00:20:35.418 valid_lft forever preferred_lft forever 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:35.418 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:35.419 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:35.419 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:35.419 altname enp217s0f1np1 00:20:35.419 altname ens818f1np1 00:20:35.419 inet 192.168.100.9/24 scope global mlx_0_1 00:20:35.419 valid_lft forever preferred_lft forever 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 
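The allocate_nic_ips pass above resolves mlx_0_0 to 192.168.100.8 and mlx_0_1 to 192.168.100.9 via the @112-@113 records. Reassembled from those traced commands, the helper is just this pipeline (sketch; the real function lives in test/nvmf/common.sh):

    # First IPv4 address of an interface, prefix length stripped --
    # the ip/awk/cut combination visible in the trace.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    # On this rig: get_ip_address mlx_0_0 -> 192.168.100.8
    #              get_ip_address mlx_0_1 -> 192.168.100.9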
00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip 
-o -4 addr show mlx_0_1 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:35.419 192.168.100.9' 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:35.419 192.168.100.9' 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:35.419 192.168.100.9' 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1132103 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1132103 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1132103 ']' 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
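nvmfappstart above launches the target in the background and then blocks in waitforlisten until the RPC socket answers, which is what the "Waiting for process..." message marks. A simplified sketch of that start-and-wait pattern (assumes rootdir points at the spdk checkout and the default /var/tmp/spdk.sock; the in-tree waitforlisten adds retry limits and better diagnostics):

    # Start nvmf_tgt with the flags from the log, then poll its RPC
    # socket; bail out if the process dies before it starts listening.
    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done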
00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:35.419 20:41:23 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:35.419 [2024-07-26 20:41:23.851222] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:20:35.419 [2024-07-26 20:41:23.851272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.419 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.419 [2024-07-26 20:41:23.936976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.676 [2024-07-26 20:41:23.976296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.676 [2024-07-26 20:41:23.976332] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.676 [2024-07-26 20:41:23.976342] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.676 [2024-07-26 20:41:23.976351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.676 [2024-07-26 20:41:23.976358] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.676 [2024-07-26 20:41:23.976384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:36.242 [2024-07-26 20:41:24.719077] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbc2f40/0xbc7430) succeed. 00:20:36.242 [2024-07-26 20:41:24.728464] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbc4440/0xc08ac0) succeed. 
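rpc_cmd in this trace forwards to SPDK's scripts/rpc.py over /var/tmp/spdk.sock, so the transport step above (target/fused_ordering.sh@15) is equivalent to the direct call below (sketch with the exact flags from the log; -u is the I/O unit size in bytes):

    # Create the RDMA transport with 1024 shared buffers and 8 KiB I/O units,
    # matching: rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock nvmf_create_transport \
        -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices immediately above confirm the transport bound both mlx5 ports.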
00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:36.242 [2024-07-26 20:41:24.788863] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.242 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:36.500 NULL1 00:20:36.500 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.501 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:20:36.501 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.501 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:36.501 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.501 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:36.501 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.501 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:36.501 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.501 20:41:24 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:36.501 [2024-07-26 20:41:24.844773] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
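The @16-@20 records above finish assembling the target before the stress binary runs. Spelled out as direct RPC calls they come to the sequence below (sketch; arguments copied from the trace -- -m 10 caps the namespace count, and NULL1 is a 1000 MiB null bdev with 512-byte blocks, which is why the client reports "Namespace ID: 1 size: 1GB"):

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1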
00:20:36.501 [2024-07-26 20:41:24.844821] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132376 ] 00:20:36.501 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.501 Attached to nqn.2016-06.io.spdk:cnode1 00:20:36.501 Namespace ID: 1 size: 1GB
00:20:36.501 fused_ordering(0) [fused_ordering(1) through fused_ordering(1022): 1,022 sequential per-command trace entries elided] 00:20:37.022 fused_ordering(1023)
00:20:37.022 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:20:37.022 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:20:37.022 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:37.022 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:20:37.023 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:37.023 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:37.023 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:20:37.023 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:37.023 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:37.023 rmmod nvme_rdma 00:20:37.023 rmmod nvme_fabrics 00:20:37.023 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 --
# set -e 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1132103 ']' 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1132103 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1132103 ']' 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1132103 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1132103 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1132103' 00:20:37.282 killing process with pid 1132103 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1132103 00:20:37.282 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1132103 00:20:37.541 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:37.541 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:37.541 00:20:37.541 real 0m10.361s 00:20:37.541 user 0m5.043s 00:20:37.541 sys 0m6.689s 00:20:37.541 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:37.541 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:37.541 ************************************ 00:20:37.541 END TEST nvmf_fused_ordering 00:20:37.541 ************************************ 00:20:37.541 20:41:25 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:20:37.541 20:41:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:37.541 20:41:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:37.541 20:41:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:37.541 ************************************ 00:20:37.541 START TEST nvmf_ns_masking 00:20:37.541 ************************************ 00:20:37.541 20:41:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:20:37.541 * Looking for test storage... 
00:20:37.541 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.541 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain prefixes repeated several more times, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=[duplicate dump of the same PATH, re-prefixed with /opt/go/1.21.1/bin, elided] 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[duplicate dump of the same PATH, re-prefixed with /opt/protoc/21.7/bin, elided] 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo [final exported PATH, duplicate dump elided] 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:37.542
20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3371842b-a04e-423a-87a4-414d11fcfa56 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f3835ef7-098b-49c8-9dfe-aab89edf786b 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=9224e3e3-3306-4335-a5be-35292336c25f 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.542 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.801 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:37.801 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:37.801 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.801 20:41:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:20:47.781 20:41:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # 
pci_devs=("${mlx[@]}") 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:47.781 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:47.781 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:47.781 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:47.781 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:47.781 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:47.782 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:47.782 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:47.782 altname enp217s0f0np0 00:20:47.782 altname ens818f0np0 00:20:47.782 inet 192.168.100.8/24 scope global mlx_0_0 00:20:47.782 valid_lft forever preferred_lft forever 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:47.782 20:41:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:47.782 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:47.782 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:47.782 altname enp217s0f1np1 00:20:47.782 altname ens818f1np1 00:20:47.782 inet 192.168.100.9/24 scope global mlx_0_1 00:20:47.782 valid_lft forever preferred_lft forever 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:20:47.782 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:47.783 20:41:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:47.783 192.168.100.9' 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:47.783 192.168.100.9' 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:47.783 192.168.100.9' 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1136563 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1136563 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1136563 ']' 00:20:47.783 20:41:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:47.783 20:41:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:47.783 [2024-07-26 20:41:34.861250] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:20:47.783 [2024-07-26 20:41:34.861303] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.783 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.783 [2024-07-26 20:41:34.947721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.783 [2024-07-26 20:41:34.988028] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.783 [2024-07-26 20:41:34.988067] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.783 [2024-07-26 20:41:34.988081] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.783 [2024-07-26 20:41:34.988092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.783 [2024-07-26 20:41:34.988101] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.783 [2024-07-26 20:41:34.988134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.783 20:41:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:47.783 20:41:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:20:47.783 20:41:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:47.783 20:41:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:47.783 20:41:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:47.783 20:41:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.783 20:41:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:47.783 [2024-07-26 20:41:35.876417] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc41c80/0xc46170) succeed. 00:20:47.783 [2024-07-26 20:41:35.886022] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc43180/0xc87800) succeed. 
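[Annotation, not part of the captured trace] Condensed from the nvmf/common.sh trace above, this is the shape of the RDMA IP discovery: each interface's IPv4 address is resolved with ip/awk/cut, and the collected RDMA_IP_LIST is split into the two target addresses (192.168.100.8 and 192.168.100.9 above). The commands are taken verbatim from the traced lines; the exact function bodies in common.sh may differ.

# Reconstructed sketch of the traced nvmf/common.sh helpers:
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per address; field 4 is e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# RDMA_IP_LIST holds one IP per line; first line -> first target, second -> second
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)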
00:20:47.783 20:41:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:20:47.783 20:41:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:20:47.783 20:41:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:47.783 Malloc1 00:20:47.783 20:41:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:47.783 Malloc2 00:20:47.783 20:41:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:48.042 20:41:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:20:48.301 20:41:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:48.301 [2024-07-26 20:41:36.823992] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:48.301 20:41:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:20:48.301 20:41:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9224e3e3-3306-4335-a5be-35292336c25f -a 192.168.100.8 -s 4420 -i 4 00:20:48.868 20:41:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:20:48.868 20:41:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:20:48.868 20:41:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:48.868 20:41:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:48.868 20:41:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:50.773 [ 0]:0x1 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efb4eaa08b724b1b9747819834da24d2 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efb4eaa08b724b1b9747819834da24d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:50.773 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:51.031 [ 0]:0x1 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efb4eaa08b724b1b9747819834da24d2 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efb4eaa08b724b1b9747819834da24d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:51.031 [ 1]:0x2 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b8519cb19874021be4bcc67640b0e03 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b8519cb19874021be4bcc67640b0e03 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:20:51.031 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:20:51.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:51.598 20:41:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:51.598 20:41:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:20:51.856 20:41:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:20:51.856 20:41:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9224e3e3-3306-4335-a5be-35292336c25f -a 192.168.100.8 -s 4420 -i 4 00:20:52.114 20:41:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:20:52.115 20:41:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:20:52.115 20:41:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:52.115 20:41:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:20:52.115 20:41:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:20:52.115 20:41:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:20:54.017 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:54.017 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:54.017 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:54.275 [ 0]:0x2 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b8519cb19874021be4bcc67640b0e03 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b8519cb19874021be4bcc67640b0e03 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:54.275 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:54.569 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:20:54.569 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:54.569 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:54.569 [ 0]:0x1 00:20:54.569 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:54.569 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:54.569 20:41:42 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efb4eaa08b724b1b9747819834da24d2 00:20:54.569 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efb4eaa08b724b1b9747819834da24d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:54.569 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:20:54.569 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:54.569 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:54.569 [ 1]:0x2 00:20:54.569 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:54.569 20:41:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:54.569 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b8519cb19874021be4bcc67640b0e03 00:20:54.569 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b8519cb19874021be4bcc67640b0e03 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:54.569 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:54.835 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:20:54.835 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:20:54.835 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:20:54.835 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:20:54.835 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.835 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:20:54.835 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.835 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:54.836 [ 0]:0x2 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b8519cb19874021be4bcc67640b0e03 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b8519cb19874021be4bcc67640b0e03 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:20:54.836 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:55.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:55.098 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:55.356 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:20:55.356 20:41:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9224e3e3-3306-4335-a5be-35292336c25f -a 192.168.100.8 -s 4420 -i 4 00:20:55.614 20:41:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:55.614 20:41:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:20:55.614 20:41:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:55.614 20:41:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:20:55.614 20:41:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:20:55.614 20:41:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:58.142 20:41:46 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:58.142 [ 0]:0x1 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:58.142 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efb4eaa08b724b1b9747819834da24d2 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efb4eaa08b724b1b9747819834da24d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:58.143 [ 1]:0x2 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b8519cb19874021be4bcc67640b0e03 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b8519cb19874021be4bcc67640b0e03 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:20:58.143 20:41:46 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:58.143 [ 0]:0x2 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b8519cb19874021be4bcc67640b0e03 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b8519cb19874021be4bcc67640b0e03 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:58.143 [2024-07-26 20:41:46.673311] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:20:58.143 request: 00:20:58.143 { 00:20:58.143 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.143 "nsid": 2, 00:20:58.143 "host": "nqn.2016-06.io.spdk:host1", 00:20:58.143 "method": "nvmf_ns_remove_host", 00:20:58.143 "req_id": 1 00:20:58.143 } 00:20:58.143 Got JSON-RPC error response 00:20:58.143 response: 00:20:58.143 { 00:20:58.143 "code": -32602, 00:20:58.143 "message": "Invalid parameters" 00:20:58.143 } 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:20:58.143 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:58.401 20:41:46 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:58.401 [ 0]:0x2 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b8519cb19874021be4bcc67640b0e03 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b8519cb19874021be4bcc67640b0e03 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:58.401 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:20:58.402 20:41:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:58.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:58.659 20:41:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1138840 00:20:58.659 20:41:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:20:58.659 20:41:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.659 20:41:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1138840 /var/tmp/host.sock 00:20:58.659 20:41:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1138840 ']' 00:20:58.659 20:41:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:20:58.659 20:41:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.659 20:41:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:58.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
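[Annotation, not part of the captured trace] The visibility assertions above all run through two helpers, reconstructed here from the traced commands (the real bodies live in target/ns_masking.sh and common/autotest_common.sh and may differ in detail). A namespace counts as visible when it shows up in nvme list-ns and reports a non-zero NGUID; NOT inverts a command's exit status so the masked (hidden) case can be asserted.

# Sketch, assuming /dev/nvme0 stands in for the $ctrl_id resolved above:
ns_is_visible() {
    nvme list-ns /dev/nvme0 | grep "$1"    # prints e.g. "[ 0]:0x1" when present
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    # hidden namespaces report an all-zero NGUID, so this test fails for them
    [[ $nguid != "00000000000000000000000000000000" ]]
}
NOT() {
    local es=0
    "$@" || es=$?
    # the traced version also screens for signal exits (( es > 128 )) and an
    # expected-output pattern; both elided here
    (( !es == 0 ))    # succeed only if the wrapped command failed
}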
00:20:58.659 20:41:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.659 20:41:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:58.659 [2024-07-26 20:41:47.164366] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:20:58.659 [2024-07-26 20:41:47.164419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138840 ] 00:20:58.659 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.917 [2024-07-26 20:41:47.250395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.917 [2024-07-26 20:41:47.289439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.483 20:41:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.483 20:41:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:20:59.483 20:41:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:59.742 20:41:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:21:00.000 20:41:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3371842b-a04e-423a-87a4-414d11fcfa56 00:21:00.000 20:41:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:21:00.000 20:41:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3371842BA04E423A87A4414D11FCFA56 -i 00:21:00.000 20:41:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f3835ef7-098b-49c8-9dfe-aab89edf786b 00:21:00.000 20:41:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:21:00.001 20:41:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F3835EF7098B49C89DFEAAB89EDF786B -i 00:21:00.260 20:41:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:00.519 20:41:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:21:00.519 20:41:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:00.519 20:41:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:00.778 nvme0n1 00:21:00.778 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:00.778 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:01.037 nvme1n2 00:21:01.037 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:21:01.037 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:21:01.037 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:21:01.037 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:21:01.037 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:21:01.297 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:21:01.297 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:21:01.297 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:21:01.297 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:21:01.556 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3371842b-a04e-423a-87a4-414d11fcfa56 == \3\3\7\1\8\4\2\b\-\a\0\4\e\-\4\2\3\a\-\8\7\a\4\-\4\1\4\d\1\1\f\c\f\a\5\6 ]] 00:21:01.556 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:21:01.556 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:21:01.556 20:41:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:21:01.556 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ f3835ef7-098b-49c8-9dfe-aab89edf786b == \f\3\8\3\5\e\f\7\-\0\9\8\b\-\4\9\c\8\-\9\d\f\e\-\a\a\b\8\9\e\d\f\7\8\6\b ]] 00:21:01.556 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1138840 00:21:01.556 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1138840 ']' 00:21:01.556 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1138840 00:21:01.556 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:21:01.556 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:01.556 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1138840 
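[Annotation, not part of the captured trace] The -g arguments passed to nvmf_subsystem_add_ns above come from uuid2nguid. The trace only shows the dash-stripping step (tr -d - at nvmf/common.sh@759); the upper-casing is an assumption here (sketched with bash's ${var^^}), inferred from the upper-case NGUIDs in the traced rpc.py calls. The later bdev_get_bdevs/jq checks then confirm the attached bdevs report the original lower-case UUIDs.

# Hedged sketch of uuid2nguid:
uuid2nguid() {
    local uuid=$1
    echo "${uuid^^}" | tr -d -    # strip dashes; upper-casing step is assumed
}
# uuid2nguid 3371842b-a04e-423a-87a4-414d11fcfa56
#   -> 3371842BA04E423A87A4414D11FCFA56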
00:21:01.556 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:01.556 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:01.556 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1138840' 00:21:01.556 killing process with pid 1138840 00:21:01.556 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1138840 00:21:01.556 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1138840 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:02.125 rmmod nvme_rdma 00:21:02.125 rmmod nvme_fabrics 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1136563 ']' 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1136563 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1136563 ']' 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1136563 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.125 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1136563 00:21:02.385 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:02.385 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:02.385 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1136563' 00:21:02.385 killing process with pid 
1136563 00:21:02.385 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1136563 00:21:02.385 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1136563 00:21:02.644 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:02.644 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:02.644 00:21:02.644 real 0m25.034s 00:21:02.644 user 0m26.133s 00:21:02.644 sys 0m9.051s 00:21:02.644 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:02.644 20:41:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:02.644 ************************************ 00:21:02.644 END TEST nvmf_ns_masking 00:21:02.644 ************************************ 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:02.645 ************************************ 00:21:02.645 START TEST nvmf_nvme_cli 00:21:02.645 ************************************ 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:21:02.645 * Looking for test storage... 
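[Editor's note] The nvmf_nvme_cli test starting here drives the stock kernel nvme-cli initiator against the SPDK RDMA target. Condensed, the host-side flow traced below comes down to the following calls, with all values copied from the trace (-i 15 is the reconnect interval the common helpers substitute for RDMA transports):

# condensed host-side flow of nvme_cli.sh, as traced below
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e

nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t rdma -a 192.168.100.8 -s 4420             # expect 2 discovery log records
nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2 namespaces
nvme disconnect -n nqn.2016-06.io.spdk:cnode1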
00:21:02.645 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.645 20:41:51 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:21:02.645 20:41:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:10.770 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:10.770 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:10.771 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:10.771 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:10.771 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:10.771 20:41:59 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:10.771 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:10.771 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:10.771 altname enp217s0f0np0 00:21:10.771 altname ens818f0np0 00:21:10.771 inet 192.168.100.8/24 scope global mlx_0_0 00:21:10.771 valid_lft forever preferred_lft forever 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:10.771 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:10.771 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:10.771 altname enp217s0f1np1 00:21:10.771 altname ens818f1np1 00:21:10.771 inet 192.168.100.9/24 scope global mlx_0_1 00:21:10.771 valid_lft forever preferred_lft forever 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:10.771 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:11.032 
20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:11.032 192.168.100.9' 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:11.032 192.168.100.9' 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:11.032 192.168.100.9' 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1143566 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1143566 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1143566 ']' 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.032 20:41:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:11.032 [2024-07-26 20:41:59.435488] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:21:11.032 [2024-07-26 20:41:59.435539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.032 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.032 [2024-07-26 20:41:59.519788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:11.032 [2024-07-26 20:41:59.561936] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.032 [2024-07-26 20:41:59.561974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.032 [2024-07-26 20:41:59.561984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.032 [2024-07-26 20:41:59.561992] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.032 [2024-07-26 20:41:59.562000] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
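[Editor's note] Before the target application comes up, the common helpers derive the RDMA addresses by taking the first IPv4 address on each mlx interface, then launch nvmf_tgt and block until its RPC socket answers. A sketch of that sequence, assuming the default /var/tmp/spdk.sock socket (the polling loop is an illustrative stand-in for the script's waitforlisten helper):

# address derivation, as in get_ip_address in the trace above
get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
NVMF_FIRST_TARGET_IP=$(get_ip mlx_0_0)     # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip mlx_0_1)    # 192.168.100.9

# launch the target with the trace's flags, then wait for the RPC socket
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten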
00:21:11.032 [2024-07-26 20:41:59.562048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.032 [2024-07-26 20:41:59.562074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.032 [2024-07-26 20:41:59.562178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.032 [2024-07-26 20:41:59.562180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:11.969 [2024-07-26 20:42:00.311223] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb5fea0/0xb64390) succeed. 00:21:11.969 [2024-07-26 20:42:00.320436] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb614e0/0xba5a20) succeed. 
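[Editor's note] With both IB devices created, the test provisions the target over RPC. The rpc_cmd calls traced just below (rpc_cmd resolves to scripts/rpc.py against the default socket) boil down to this sequence; sizes, NQNs, and the serial are copied from the trace:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
"$rpc" bdev_malloc_create 64 512 -b Malloc1
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420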
00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:11.969 Malloc0 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:11.969 Malloc1 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:11.969 [2024-07-26 20:42:00.516545] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:11.969 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.228 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:12.228 20:42:00 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.228 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:12.228 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.229 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:21:12.229 00:21:12.229 Discovery Log Number of Records 2, Generation counter 2 00:21:12.229 =====Discovery Log Entry 0====== 00:21:12.229 trtype: rdma 00:21:12.229 adrfam: ipv4 00:21:12.229 subtype: current discovery subsystem 00:21:12.229 treq: not required 00:21:12.229 portid: 0 00:21:12.229 trsvcid: 4420 00:21:12.229 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:12.229 traddr: 192.168.100.8 00:21:12.229 eflags: explicit discovery connections, duplicate discovery information 00:21:12.229 rdma_prtype: not specified 00:21:12.229 rdma_qptype: connected 00:21:12.229 rdma_cms: rdma-cm 00:21:12.229 rdma_pkey: 0x0000 00:21:12.229 =====Discovery Log Entry 1====== 00:21:12.229 trtype: rdma 00:21:12.229 adrfam: ipv4 00:21:12.229 subtype: nvme subsystem 00:21:12.229 treq: not required 00:21:12.229 portid: 0 00:21:12.229 trsvcid: 4420 00:21:12.229 subnqn: nqn.2016-06.io.spdk:cnode1 00:21:12.229 traddr: 192.168.100.8 00:21:12.229 eflags: none 00:21:12.229 rdma_prtype: not specified 00:21:12.229 rdma_qptype: connected 00:21:12.229 rdma_cms: rdma-cm 00:21:12.229 rdma_pkey: 0x0000 00:21:12.229 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:21:12.229 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:21:12.229 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:21:12.229 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:12.229 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:21:12.229 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:21:12.229 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:12.229 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:21:12.229 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:12.229 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:21:12.229 20:42:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:13.165 20:42:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:13.165 20:42:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:21:13.165 20:42:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:13.165 20:42:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:21:13.165 20:42:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:21:13.165 20:42:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:21:15.062 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:15.062 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:15.062 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:21:15.319 /dev/nvme0n1 ]] 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:21:15.319 20:42:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:16.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:16.321 
20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:16.321 rmmod nvme_rdma 00:21:16.321 rmmod nvme_fabrics 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1143566 ']' 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1143566 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1143566 ']' 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1143566 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1143566 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1143566' 00:21:16.321 killing process with pid 1143566 00:21:16.321 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1143566 00:21:16.322 20:42:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1143566 00:21:16.580 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:16.580 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:16.580 00:21:16.580 real 0m14.046s 00:21:16.580 user 0m24.242s 00:21:16.580 sys 0m6.829s 00:21:16.580 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:16.580 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:16.580 ************************************ 00:21:16.580 END TEST nvmf_nvme_cli 00:21:16.580 ************************************ 00:21:16.580 20:42:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:21:16.580 20:42:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:16.839 ************************************ 00:21:16.839 START TEST nvmf_auth_target 00:21:16.839 ************************************ 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:21:16.839 * Looking for test storage... 00:21:16.839 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.839 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 
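[Editor's note] The nvmf_auth_target test configured just below presumably sweeps every digest against every DH group; the two arrays initialized in the trace define that matrix, keyed to the nqn.2024-03.io.spdk:cnode0 subsystem. The shape of the sweep, with an illustrative loop body (the actual per-combination steps are not shown in this excerpt):

# digest x dhgroup matrix, values as initialized in the trace below
digests=("sha256" "sha384" "sha512")
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        : # each pair is exercised in turn (body illustrative)
    done
done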
00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:21:16.840 20:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@295 -- # net_devs=() 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:24.963 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:24.963 20:42:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:24.963 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:24.963 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: 
mlx_0_1' 00:21:24.963 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:21:24.963 
20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:24.963 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:24.964 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:24.964 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:24.964 altname enp217s0f0np0 00:21:24.964 altname ens818f0np0 00:21:24.964 inet 192.168.100.8/24 scope global mlx_0_0 00:21:24.964 valid_lft forever preferred_lft forever 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:24.964 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:24.964 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:24.964 altname enp217s0f1np1 00:21:24.964 altname ens818f1np1 00:21:24.964 inet 192.168.100.9/24 scope global mlx_0_1 00:21:24.964 valid_lft forever preferred_lft forever 
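The allocate_nic_ips pass above resolves each RDMA interface name to its IPv4 address by parsing "ip -o -4 addr show": awk selects the address/prefix column and cut strips the prefix length. A minimal sketch of that lookup, following the exact pipeline the trace runs at nvmf/common.sh@113:

    # Print the IPv4 address (without the /prefix) of the given interface,
    # matching the ip | awk | cut pipeline in the trace above.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8 on this node
    get_ip_address mlx_0_1   # 192.168.100.9 on this node

The first address found becomes NVMF_FIRST_TARGET_IP (192.168.100.8 here, via head -n 1) and the second becomes NVMF_SECOND_TARGET_IP (192.168.100.9, via tail -n +2), as the RDMA_IP_LIST handling further down shows.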
00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:24.964 192.168.100.9' 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:24.964 192.168.100.9' 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:24.964 192.168.100.9' 00:21:24.964 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1149121 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1149121 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1149121 ']' 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:25.223 20:42:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:25.223 20:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1149169 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4fbbb5503317c4b272177f39e20d71b9e80c1c742271071f 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Q12 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4fbbb5503317c4b272177f39e20d71b9e80c1c742271071f 0 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4fbbb5503317c4b272177f39e20d71b9e80c1c742271071f 0 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4fbbb5503317c4b272177f39e20d71b9e80c1c742271071f 00:21:26.161 
20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Q12 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Q12 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Q12 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c1086c10f72cbdb1b44e6b74fc95865cd763faa389dc77f553fa05580f956301 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2Sw 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c1086c10f72cbdb1b44e6b74fc95865cd763faa389dc77f553fa05580f956301 3 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c1086c10f72cbdb1b44e6b74fc95865cd763faa389dc77f553fa05580f956301 3 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c1086c10f72cbdb1b44e6b74fc95865cd763faa389dc77f553fa05580f956301 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2Sw 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2Sw 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.2Sw 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=272d577665d121869e3267ef7f1c56e8 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BCy 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 272d577665d121869e3267ef7f1c56e8 1 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 272d577665d121869e3267ef7f1c56e8 1 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:26.161 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=272d577665d121869e3267ef7f1c56e8 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BCy 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BCy 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.BCy 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b111486cf212a020b35c278e96e3118369d8377c5cc3fac6 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ycM 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b111486cf212a020b35c278e96e3118369d8377c5cc3fac6 2 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@719 -- # format_key DHHC-1 b111486cf212a020b35c278e96e3118369d8377c5cc3fac6 2 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b111486cf212a020b35c278e96e3118369d8377c5cc3fac6 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ycM 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ycM 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ycM 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:21:26.162 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4444cd7a22f1d9ce7fe5649df13951fde81612e59c24324b 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YYC 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4444cd7a22f1d9ce7fe5649df13951fde81612e59c24324b 2 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4444cd7a22f1d9ce7fe5649df13951fde81612e59c24324b 2 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4444cd7a22f1d9ce7fe5649df13951fde81612e59c24324b 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YYC 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YYC 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.YYC 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7b1a7e5b0f80daab5f71973d501db6a6 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ya7 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7b1a7e5b0f80daab5f71973d501db6a6 1 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7b1a7e5b0f80daab5f71973d501db6a6 1 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7b1a7e5b0f80daab5f71973d501db6a6 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ya7 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ya7 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.ya7 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9ee63253f516ca662b1fce82a9f3e53ee846ad9eb011591345dea67b311ce1cf 00:21:26.422 20:42:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.BjE 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9ee63253f516ca662b1fce82a9f3e53ee846ad9eb011591345dea67b311ce1cf 3 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9ee63253f516ca662b1fce82a9f3e53ee846ad9eb011591345dea67b311ce1cf 3 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9ee63253f516ca662b1fce82a9f3e53ee846ad9eb011591345dea67b311ce1cf 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.BjE 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.BjE 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.BjE 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1149121 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1149121 ']' 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
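The gen_dhchap_key calls above each draw the raw key as hex from /dev/urandom with xxd, write it to a mktemp file, wrap it in a DHHC-1:<digest-id>:...: envelope via an inline python step, and chmod the file to 0600. The trace never prints the envelope logic itself; comparing the generated hex (4fbbb550...071f) with the DHHC-1:00:NGZiYmI1... secret that nvme connect is handed later in the log suggests base64 over the ASCII key plus a 4-byte CRC-32 trailer, so the framing in this sketch is an inference from this log, not SPDK's verbatim code:

    # Hedged reconstruction of gen_dhchap_key: <digest> selects the DHHC-1
    # id (the trace maps null/sha256/sha384/sha512 to 0..3) and <len> is
    # the key length in hex characters.
    gen_dhchap_key() {
        local digest=$1 len=$2
        local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. len=48 -> 24 random bytes
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # Envelope format inferred from the secrets in this log:
        # DHHC-1:<id>:base64(ascii_key || crc32(ascii_key), little endian):
        python3 -c "import base64, zlib; k = b'$key'; print('DHHC-1:%02d:%s:' % (${ids[$digest]}, base64.b64encode(k + zlib.crc32(k).to_bytes(4, 'little')).decode()))" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }
    keys[0]=$(gen_dhchap_key null 48)      # matches target/auth.sh@67 above
    ckeys[0]=$(gen_dhchap_key sha512 64)

The file paths this returns are what the keyring_file_add_key RPCs below register on both sides, against the target socket /var/tmp/spdk.sock and the host socket /var/tmp/host.sock, before each connect_authenticate round pins a digest/dhgroup pair with bdev_nvme_set_options and attaches the controller.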
00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:26.422 20:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.682 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:26.682 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:26.682 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1149169 /var/tmp/host.sock 00:21:26.682 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1149169 ']' 00:21:26.682 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:21:26.682 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:26.682 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:26.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:21:26.682 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:26.682 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.941 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:26.941 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:26.941 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:21:26.941 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.941 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.941 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.941 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:26.941 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Q12 00:21:26.941 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.941 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.941 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.941 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Q12 00:21:26.941 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Q12 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.2Sw ]] 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Sw 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Sw 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Sw 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BCy 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.BCy 00:21:27.201 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.BCy 00:21:27.461 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ycM ]] 00:21:27.461 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ycM 00:21:27.461 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.461 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.461 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.461 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ycM 00:21:27.461 20:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ycM 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.YYC 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.YYC 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.YYC 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.ya7 ]] 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ya7 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ya7 00:21:27.720 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ya7 00:21:27.980 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:27.980 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.BjE 00:21:27.980 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.980 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.980 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.980 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.BjE 00:21:27.980 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.BjE 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:28.239 20:42:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.239 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.498 00:21:28.499 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.499 20:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.499 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.758 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.758 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.758 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.758 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.758 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.758 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.758 { 00:21:28.758 "cntlid": 1, 00:21:28.758 "qid": 0, 00:21:28.758 "state": "enabled", 00:21:28.758 "thread": "nvmf_tgt_poll_group_000", 00:21:28.758 "listen_address": { 00:21:28.758 "trtype": "RDMA", 00:21:28.758 "adrfam": "IPv4", 00:21:28.758 "traddr": "192.168.100.8", 00:21:28.758 "trsvcid": "4420" 00:21:28.758 }, 00:21:28.758 "peer_address": { 00:21:28.758 "trtype": "RDMA", 00:21:28.758 "adrfam": "IPv4", 00:21:28.758 "traddr": "192.168.100.8", 00:21:28.758 "trsvcid": "43507" 00:21:28.758 }, 00:21:28.758 "auth": { 00:21:28.758 "state": "completed", 00:21:28.758 "digest": "sha256", 00:21:28.758 "dhgroup": "null" 00:21:28.758 } 00:21:28.758 } 00:21:28.758 ]' 00:21:28.758 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.758 20:42:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:28.758 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.758 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:28.758 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.017 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.017 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.017 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.017 20:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:21:29.585 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.844 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:29.844 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.844 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.844 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.844 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.844 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:29.844 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:30.104 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:21:30.104 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.104 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:30.104 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:30.104 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:30.104 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.104 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.104 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.104 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.104 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.104 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.104 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.104 00:21:30.363 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.363 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.363 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.363 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.363 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.363 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.363 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.363 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.363 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.363 { 00:21:30.363 "cntlid": 3, 00:21:30.363 "qid": 0, 00:21:30.363 "state": "enabled", 00:21:30.363 "thread": "nvmf_tgt_poll_group_000", 00:21:30.363 "listen_address": { 00:21:30.363 "trtype": "RDMA", 00:21:30.363 "adrfam": "IPv4", 00:21:30.363 "traddr": "192.168.100.8", 00:21:30.363 "trsvcid": "4420" 00:21:30.363 }, 00:21:30.363 "peer_address": { 00:21:30.363 "trtype": "RDMA", 00:21:30.363 "adrfam": "IPv4", 00:21:30.363 "traddr": "192.168.100.8", 00:21:30.363 "trsvcid": "52245" 00:21:30.363 }, 00:21:30.363 "auth": { 00:21:30.363 "state": "completed", 00:21:30.363 "digest": "sha256", 00:21:30.363 "dhgroup": "null" 00:21:30.363 } 00:21:30.363 } 00:21:30.363 ]' 00:21:30.363 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.363 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:30.363 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.622 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:30.622 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.622 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.622 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.622 20:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.881 20:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:21:31.449 20:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.449 20:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:31.449 20:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.449 20:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.449 20:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.449 20:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.449 20:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:31.449 20:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:31.709 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:21:31.709 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.709 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:31.709 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:31.709 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:31.709 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.709 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.709 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.709 20:42:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.709 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.709 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.709 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.969 00:21:31.969 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.969 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.969 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.969 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.969 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.969 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.969 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.229 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.229 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.229 { 00:21:32.229 "cntlid": 5, 00:21:32.229 "qid": 0, 00:21:32.229 "state": "enabled", 00:21:32.229 "thread": "nvmf_tgt_poll_group_000", 00:21:32.229 "listen_address": { 00:21:32.229 "trtype": "RDMA", 00:21:32.229 "adrfam": "IPv4", 00:21:32.229 "traddr": "192.168.100.8", 00:21:32.229 "trsvcid": "4420" 00:21:32.229 }, 00:21:32.229 "peer_address": { 00:21:32.229 "trtype": "RDMA", 00:21:32.229 "adrfam": "IPv4", 00:21:32.229 "traddr": "192.168.100.8", 00:21:32.229 "trsvcid": "36525" 00:21:32.229 }, 00:21:32.229 "auth": { 00:21:32.229 "state": "completed", 00:21:32.229 "digest": "sha256", 00:21:32.229 "dhgroup": "null" 00:21:32.229 } 00:21:32.229 } 00:21:32.229 ]' 00:21:32.229 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.229 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:32.229 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.229 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:32.229 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.229 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.229 20:42:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.229 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.488 20:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:21:33.056 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.056 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:33.056 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.056 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.056 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.056 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.056 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:33.056 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:33.315 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:21:33.315 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.315 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:33.315 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:33.315 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:33.315 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.315 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:33.315 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.315 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.315 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.315 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t 
rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.315 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.574 00:21:33.574 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.574 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.574 20:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.833 { 00:21:33.833 "cntlid": 7, 00:21:33.833 "qid": 0, 00:21:33.833 "state": "enabled", 00:21:33.833 "thread": "nvmf_tgt_poll_group_000", 00:21:33.833 "listen_address": { 00:21:33.833 "trtype": "RDMA", 00:21:33.833 "adrfam": "IPv4", 00:21:33.833 "traddr": "192.168.100.8", 00:21:33.833 "trsvcid": "4420" 00:21:33.833 }, 00:21:33.833 "peer_address": { 00:21:33.833 "trtype": "RDMA", 00:21:33.833 "adrfam": "IPv4", 00:21:33.833 "traddr": "192.168.100.8", 00:21:33.833 "trsvcid": "59149" 00:21:33.833 }, 00:21:33.833 "auth": { 00:21:33.833 "state": "completed", 00:21:33.833 "digest": "sha256", 00:21:33.833 "dhgroup": "null" 00:21:33.833 } 00:21:33.833 } 00:21:33.833 ]' 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.833 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.093 20:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:21:34.696 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.696 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:34.696 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.696 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.696 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.696 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.696 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.696 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:34.696 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:34.955 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:21:34.955 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.955 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:34.955 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:34.955 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:34.955 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.955 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.955 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.955 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.955 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.955 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.955 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.213 00:21:35.213 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.213 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.213 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.471 { 00:21:35.471 "cntlid": 9, 00:21:35.471 "qid": 0, 00:21:35.471 "state": "enabled", 00:21:35.471 "thread": "nvmf_tgt_poll_group_000", 00:21:35.471 "listen_address": { 00:21:35.471 "trtype": "RDMA", 00:21:35.471 "adrfam": "IPv4", 00:21:35.471 "traddr": "192.168.100.8", 00:21:35.471 "trsvcid": "4420" 00:21:35.471 }, 00:21:35.471 "peer_address": { 00:21:35.471 "trtype": "RDMA", 00:21:35.471 "adrfam": "IPv4", 00:21:35.471 "traddr": "192.168.100.8", 00:21:35.471 "trsvcid": "33789" 00:21:35.471 }, 00:21:35.471 "auth": { 00:21:35.471 "state": "completed", 00:21:35.471 "digest": "sha256", 00:21:35.471 "dhgroup": "ffdhe2048" 00:21:35.471 } 00:21:35.471 } 00:21:35.471 ]' 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.471 20:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.730 20:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:21:36.297 20:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.556 20:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:36.556 20:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.556 20:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.556 20:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.556 20:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.556 20:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:36.556 20:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:36.556 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:21:36.556 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.556 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:36.556 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:36.556 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:36.556 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.556 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.556 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.556 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.556 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.556 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.556 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.814 00:21:36.814 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.814 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.814 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.073 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.073 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.073 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.073 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.073 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.073 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.073 { 00:21:37.073 "cntlid": 11, 00:21:37.073 "qid": 0, 00:21:37.073 "state": "enabled", 00:21:37.073 "thread": "nvmf_tgt_poll_group_000", 00:21:37.073 "listen_address": { 00:21:37.073 "trtype": "RDMA", 00:21:37.073 "adrfam": "IPv4", 00:21:37.073 "traddr": "192.168.100.8", 00:21:37.073 "trsvcid": "4420" 00:21:37.073 }, 00:21:37.073 "peer_address": { 00:21:37.073 "trtype": "RDMA", 00:21:37.073 "adrfam": "IPv4", 00:21:37.073 "traddr": "192.168.100.8", 00:21:37.073 "trsvcid": "57408" 00:21:37.073 }, 00:21:37.073 "auth": { 00:21:37.073 "state": "completed", 00:21:37.073 "digest": "sha256", 00:21:37.073 "dhgroup": "ffdhe2048" 00:21:37.073 } 00:21:37.073 } 00:21:37.073 ]' 00:21:37.073 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.073 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:37.073 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.073 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:37.073 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.331 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.331 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.332 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.332 20:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:21:37.899 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.157 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:38.157 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.157 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.157 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.157 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.157 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:38.157 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:38.416 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:21:38.416 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.416 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:38.416 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:38.416 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:38.416 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.416 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.416 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.416 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.416 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.416 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.416 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.675 00:21:38.675 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.675 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:21:38.675 20:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.675 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.675 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.675 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.675 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.675 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.675 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.675 { 00:21:38.675 "cntlid": 13, 00:21:38.675 "qid": 0, 00:21:38.675 "state": "enabled", 00:21:38.675 "thread": "nvmf_tgt_poll_group_000", 00:21:38.675 "listen_address": { 00:21:38.675 "trtype": "RDMA", 00:21:38.675 "adrfam": "IPv4", 00:21:38.675 "traddr": "192.168.100.8", 00:21:38.675 "trsvcid": "4420" 00:21:38.675 }, 00:21:38.675 "peer_address": { 00:21:38.675 "trtype": "RDMA", 00:21:38.675 "adrfam": "IPv4", 00:21:38.675 "traddr": "192.168.100.8", 00:21:38.675 "trsvcid": "37578" 00:21:38.675 }, 00:21:38.675 "auth": { 00:21:38.675 "state": "completed", 00:21:38.675 "digest": "sha256", 00:21:38.675 "dhgroup": "ffdhe2048" 00:21:38.675 } 00:21:38.675 } 00:21:38.675 ]' 00:21:38.675 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.933 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:38.933 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.933 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:38.933 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.933 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.933 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.933 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.192 20:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:21:39.760 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.760 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:39.760 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.760 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.760 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.760 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.760 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:39.760 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:40.019 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:21:40.019 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.019 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:40.019 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:40.019 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:40.019 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.019 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:40.019 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.019 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.019 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.019 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.019 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.278 00:21:40.278 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.278 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.278 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.537 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.537 20:42:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.537 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.537 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.537 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.537 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.537 { 00:21:40.537 "cntlid": 15, 00:21:40.537 "qid": 0, 00:21:40.537 "state": "enabled", 00:21:40.537 "thread": "nvmf_tgt_poll_group_000", 00:21:40.537 "listen_address": { 00:21:40.537 "trtype": "RDMA", 00:21:40.537 "adrfam": "IPv4", 00:21:40.537 "traddr": "192.168.100.8", 00:21:40.537 "trsvcid": "4420" 00:21:40.537 }, 00:21:40.537 "peer_address": { 00:21:40.537 "trtype": "RDMA", 00:21:40.537 "adrfam": "IPv4", 00:21:40.537 "traddr": "192.168.100.8", 00:21:40.537 "trsvcid": "54007" 00:21:40.537 }, 00:21:40.537 "auth": { 00:21:40.537 "state": "completed", 00:21:40.537 "digest": "sha256", 00:21:40.537 "dhgroup": "ffdhe2048" 00:21:40.537 } 00:21:40.537 } 00:21:40.537 ]' 00:21:40.537 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.537 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:40.537 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.537 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:40.537 20:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.537 20:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.537 20:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.537 20:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.796 20:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:21:41.363 20:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.363 20:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:41.363 20:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.363 20:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.622 20:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.622 
20:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.622 20:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.622 20:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:41.622 20:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:41.622 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:21:41.622 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.622 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:41.622 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:41.622 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.622 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.622 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.622 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.622 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.622 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.622 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.622 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.882 00:21:41.882 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.882 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.882 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.141 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.141 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.141 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:42.141 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.141 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.141 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.141 { 00:21:42.141 "cntlid": 17, 00:21:42.141 "qid": 0, 00:21:42.141 "state": "enabled", 00:21:42.141 "thread": "nvmf_tgt_poll_group_000", 00:21:42.141 "listen_address": { 00:21:42.141 "trtype": "RDMA", 00:21:42.141 "adrfam": "IPv4", 00:21:42.141 "traddr": "192.168.100.8", 00:21:42.141 "trsvcid": "4420" 00:21:42.141 }, 00:21:42.141 "peer_address": { 00:21:42.141 "trtype": "RDMA", 00:21:42.141 "adrfam": "IPv4", 00:21:42.141 "traddr": "192.168.100.8", 00:21:42.141 "trsvcid": "56111" 00:21:42.141 }, 00:21:42.141 "auth": { 00:21:42.141 "state": "completed", 00:21:42.141 "digest": "sha256", 00:21:42.141 "dhgroup": "ffdhe3072" 00:21:42.141 } 00:21:42.141 } 00:21:42.141 ]' 00:21:42.141 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.141 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:42.141 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.141 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:42.141 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.400 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.400 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.400 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.400 20:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:21:42.968 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.227 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:43.227 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.227 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.228 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.228 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.228 20:42:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:43.228 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:43.487 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:21:43.487 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.487 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:43.487 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:43.487 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:43.487 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.487 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.487 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.487 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.487 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.487 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.487 20:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.747 00:21:43.747 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.747 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.747 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.747 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.747 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.747 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.747 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.747 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.747 
20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.747 { 00:21:43.747 "cntlid": 19, 00:21:43.747 "qid": 0, 00:21:43.747 "state": "enabled", 00:21:43.747 "thread": "nvmf_tgt_poll_group_000", 00:21:43.747 "listen_address": { 00:21:43.747 "trtype": "RDMA", 00:21:43.747 "adrfam": "IPv4", 00:21:43.747 "traddr": "192.168.100.8", 00:21:43.747 "trsvcid": "4420" 00:21:43.747 }, 00:21:43.747 "peer_address": { 00:21:43.747 "trtype": "RDMA", 00:21:43.747 "adrfam": "IPv4", 00:21:43.747 "traddr": "192.168.100.8", 00:21:43.747 "trsvcid": "43439" 00:21:43.747 }, 00:21:43.747 "auth": { 00:21:43.747 "state": "completed", 00:21:43.747 "digest": "sha256", 00:21:43.747 "dhgroup": "ffdhe3072" 00:21:43.747 } 00:21:43.747 } 00:21:43.747 ]' 00:21:43.747 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.010 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:44.010 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.010 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:44.010 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.010 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.010 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.010 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.270 20:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:21:44.838 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.838 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:44.838 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.838 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.838 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.838 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.838 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:44.838 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:45.097 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:21:45.097 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.097 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:45.097 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:45.097 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:45.097 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.097 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.097 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.097 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.097 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.097 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.097 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.356 00:21:45.356 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.356 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.356 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.615 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.615 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.615 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.615 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.615 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.615 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.615 { 00:21:45.615 "cntlid": 21, 00:21:45.615 "qid": 0, 00:21:45.615 "state": "enabled", 00:21:45.615 "thread": "nvmf_tgt_poll_group_000", 
00:21:45.615 "listen_address": { 00:21:45.615 "trtype": "RDMA", 00:21:45.615 "adrfam": "IPv4", 00:21:45.615 "traddr": "192.168.100.8", 00:21:45.615 "trsvcid": "4420" 00:21:45.615 }, 00:21:45.615 "peer_address": { 00:21:45.615 "trtype": "RDMA", 00:21:45.615 "adrfam": "IPv4", 00:21:45.615 "traddr": "192.168.100.8", 00:21:45.615 "trsvcid": "36282" 00:21:45.615 }, 00:21:45.615 "auth": { 00:21:45.615 "state": "completed", 00:21:45.615 "digest": "sha256", 00:21:45.615 "dhgroup": "ffdhe3072" 00:21:45.615 } 00:21:45.615 } 00:21:45.615 ]' 00:21:45.615 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.615 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:45.615 20:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.615 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:45.615 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.615 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.615 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.615 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.875 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:21:46.443 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.443 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:46.443 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.443 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.443 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.444 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.444 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:46.444 20:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:46.703 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 
00:21:46.703 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.703 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:46.703 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:46.703 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:46.703 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.703 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:46.703 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.703 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.703 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.703 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.703 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.961 00:21:46.961 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.961 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.961 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.220 { 00:21:47.220 "cntlid": 23, 00:21:47.220 "qid": 0, 00:21:47.220 "state": "enabled", 00:21:47.220 "thread": "nvmf_tgt_poll_group_000", 00:21:47.220 "listen_address": { 00:21:47.220 "trtype": "RDMA", 00:21:47.220 "adrfam": "IPv4", 00:21:47.220 "traddr": "192.168.100.8", 00:21:47.220 "trsvcid": "4420" 00:21:47.220 }, 00:21:47.220 "peer_address": { 00:21:47.220 "trtype": "RDMA", 00:21:47.220 "adrfam": "IPv4", 00:21:47.220 "traddr": "192.168.100.8", 00:21:47.220 "trsvcid": "37636" 00:21:47.220 }, 00:21:47.220 
"auth": { 00:21:47.220 "state": "completed", 00:21:47.220 "digest": "sha256", 00:21:47.220 "dhgroup": "ffdhe3072" 00:21:47.220 } 00:21:47.220 } 00:21:47.220 ]' 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.220 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.479 20:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:21:48.047 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.307 20:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.566 00:21:48.566 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.566 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.566 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.892 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.892 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.892 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.892 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.892 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.892 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.892 { 00:21:48.892 "cntlid": 25, 00:21:48.892 "qid": 0, 00:21:48.892 "state": "enabled", 00:21:48.892 "thread": "nvmf_tgt_poll_group_000", 00:21:48.892 "listen_address": { 00:21:48.892 "trtype": "RDMA", 00:21:48.892 "adrfam": "IPv4", 00:21:48.892 "traddr": "192.168.100.8", 00:21:48.892 "trsvcid": "4420" 00:21:48.892 }, 00:21:48.892 "peer_address": { 00:21:48.892 "trtype": "RDMA", 00:21:48.892 "adrfam": "IPv4", 00:21:48.892 "traddr": "192.168.100.8", 00:21:48.892 "trsvcid": "41396" 00:21:48.892 }, 00:21:48.892 "auth": { 00:21:48.892 "state": "completed", 00:21:48.892 "digest": "sha256", 00:21:48.892 "dhgroup": "ffdhe4096" 00:21:48.892 } 00:21:48.892 } 00:21:48.892 ]' 00:21:48.892 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.892 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:48.893 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.893 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:48.893 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.151 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.151 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.152 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.152 20:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:21:49.719 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.978 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.237 00:21:50.495 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.495 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.496 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.496 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.496 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.496 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.496 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.496 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.496 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.496 { 00:21:50.496 "cntlid": 27, 00:21:50.496 "qid": 0, 00:21:50.496 "state": "enabled", 00:21:50.496 "thread": "nvmf_tgt_poll_group_000", 00:21:50.496 "listen_address": { 00:21:50.496 "trtype": "RDMA", 00:21:50.496 "adrfam": "IPv4", 00:21:50.496 "traddr": "192.168.100.8", 00:21:50.496 "trsvcid": "4420" 00:21:50.496 }, 00:21:50.496 "peer_address": { 00:21:50.496 "trtype": "RDMA", 00:21:50.496 "adrfam": "IPv4", 00:21:50.496 "traddr": "192.168.100.8", 00:21:50.496 "trsvcid": "48737" 00:21:50.496 }, 00:21:50.496 "auth": { 00:21:50.496 "state": "completed", 00:21:50.496 "digest": "sha256", 00:21:50.496 "dhgroup": "ffdhe4096" 00:21:50.496 } 00:21:50.496 } 00:21:50.496 ]' 00:21:50.496 20:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.496 20:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:50.496 20:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.755 20:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:50.755 20:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.755 20:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.755 20:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.755 20:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.755 20:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:21:51.691 20:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.691 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.952 00:21:51.952 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.952 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.952 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.210 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.211 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.211 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.211 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.211 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.211 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.211 { 00:21:52.211 "cntlid": 29, 00:21:52.211 "qid": 0, 00:21:52.211 "state": "enabled", 00:21:52.211 "thread": "nvmf_tgt_poll_group_000", 00:21:52.211 "listen_address": { 00:21:52.211 "trtype": "RDMA", 00:21:52.211 "adrfam": "IPv4", 00:21:52.211 "traddr": "192.168.100.8", 00:21:52.211 "trsvcid": "4420" 00:21:52.211 }, 00:21:52.211 "peer_address": { 00:21:52.211 "trtype": "RDMA", 00:21:52.211 "adrfam": "IPv4", 00:21:52.211 "traddr": "192.168.100.8", 00:21:52.211 "trsvcid": "48039" 00:21:52.211 }, 00:21:52.211 "auth": { 00:21:52.211 "state": "completed", 00:21:52.211 "digest": "sha256", 00:21:52.211 "dhgroup": "ffdhe4096" 00:21:52.211 } 00:21:52.211 } 00:21:52.211 ]' 00:21:52.211 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.211 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:52.211 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.469 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:52.469 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
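(The [[ sha256 == \s\h\a\2\5\6 ]]-style checks above compare jq output against the expected digest, dhgroup, and auth state of the single qpair; a minimal sketch of that verification step, assuming $qpairs holds the JSON array printed by nvmf_subsystem_get_qpairs as in the trace:)

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# the subsystem has exactly one qpair at this point, so index 0 is the controller just attached
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]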
00:21:52.469 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.469 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.469 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.469 20:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.404 20:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.663 00:21:53.663 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.663 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.663 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.922 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.922 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.922 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.922 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.922 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.922 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.922 { 00:21:53.922 "cntlid": 31, 00:21:53.922 "qid": 0, 00:21:53.922 "state": "enabled", 00:21:53.922 "thread": "nvmf_tgt_poll_group_000", 00:21:53.922 "listen_address": { 00:21:53.922 "trtype": "RDMA", 00:21:53.922 "adrfam": "IPv4", 00:21:53.922 "traddr": "192.168.100.8", 00:21:53.922 "trsvcid": "4420" 00:21:53.922 }, 00:21:53.922 "peer_address": { 00:21:53.922 "trtype": "RDMA", 00:21:53.922 "adrfam": "IPv4", 00:21:53.922 "traddr": "192.168.100.8", 00:21:53.922 "trsvcid": "57749" 00:21:53.922 }, 00:21:53.923 "auth": { 00:21:53.923 "state": "completed", 00:21:53.923 "digest": "sha256", 00:21:53.923 "dhgroup": "ffdhe4096" 00:21:53.923 } 00:21:53.923 } 00:21:53.923 ]' 00:21:53.923 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.923 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:53.923 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.923 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:53.923 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.183 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.183 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.183 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.183 20:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:21:54.751 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.010 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:55.010 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.010 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.010 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.010 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.010 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.010 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:55.010 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:55.268 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:21:55.269 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.269 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:55.269 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:55.269 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:55.269 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.269 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.269 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.269 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.269 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.269 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.269 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.527 00:21:55.527 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.527 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.527 20:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.786 { 00:21:55.786 "cntlid": 33, 00:21:55.786 "qid": 0, 00:21:55.786 "state": "enabled", 00:21:55.786 "thread": "nvmf_tgt_poll_group_000", 00:21:55.786 "listen_address": { 00:21:55.786 "trtype": "RDMA", 00:21:55.786 "adrfam": "IPv4", 00:21:55.786 "traddr": "192.168.100.8", 00:21:55.786 "trsvcid": "4420" 00:21:55.786 }, 00:21:55.786 "peer_address": { 00:21:55.786 "trtype": "RDMA", 00:21:55.786 "adrfam": "IPv4", 00:21:55.786 "traddr": "192.168.100.8", 00:21:55.786 "trsvcid": "50635" 00:21:55.786 }, 00:21:55.786 "auth": { 00:21:55.786 "state": "completed", 00:21:55.786 "digest": "sha256", 00:21:55.786 "dhgroup": "ffdhe6144" 00:21:55.786 } 00:21:55.786 } 00:21:55.786 ]' 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.786 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.045 20:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:21:56.612 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.612 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:56.612 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.612 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.873 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.132 00:21:57.132 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.391 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.391 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.391 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.391 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.391 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.391 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.391 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.391 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.391 { 00:21:57.391 "cntlid": 35, 00:21:57.391 "qid": 0, 00:21:57.391 "state": "enabled", 00:21:57.391 "thread": "nvmf_tgt_poll_group_000", 00:21:57.391 "listen_address": { 00:21:57.391 "trtype": "RDMA", 00:21:57.391 "adrfam": "IPv4", 00:21:57.391 "traddr": "192.168.100.8", 00:21:57.391 "trsvcid": "4420" 00:21:57.391 }, 00:21:57.391 "peer_address": { 00:21:57.391 "trtype": "RDMA", 00:21:57.391 "adrfam": "IPv4", 00:21:57.391 "traddr": "192.168.100.8", 00:21:57.391 "trsvcid": "55838" 00:21:57.391 }, 00:21:57.391 "auth": { 00:21:57.391 "state": "completed", 00:21:57.391 "digest": "sha256", 00:21:57.391 "dhgroup": "ffdhe6144" 00:21:57.391 } 00:21:57.391 } 00:21:57.391 ]' 00:21:57.391 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.391 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:57.391 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.650 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.650 20:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.650 20:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.650 20:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.650 20:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.650 20:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:21:58.587 20:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.587 20:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:58.587 20:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.587 20:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 20:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.587 20:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.587 20:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:58.587 20:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:58.587 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:21:58.587 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.587 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:58.587 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:58.587 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:58.587 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.587 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.587 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.587 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.587 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.587 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.587 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key 
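(Besides the SPDK host path exercised through bdev_nvme_attach_controller, each iteration also authenticates the kernel initiator via nvme-cli, as in the connect/disconnect pairs above; a sketch of that round trip with the secrets truncated here, since the full DHHC-1 test keys appear verbatim in the log:)

nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
  --hostid 8013ee90-59d8-e711-906e-00163566263e \
  --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0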
ckey2 00:21:59.155 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.155 { 00:21:59.155 "cntlid": 37, 00:21:59.155 "qid": 0, 00:21:59.155 "state": "enabled", 00:21:59.155 "thread": "nvmf_tgt_poll_group_000", 00:21:59.155 "listen_address": { 00:21:59.155 "trtype": "RDMA", 00:21:59.155 "adrfam": "IPv4", 00:21:59.155 "traddr": "192.168.100.8", 00:21:59.155 "trsvcid": "4420" 00:21:59.155 }, 00:21:59.155 "peer_address": { 00:21:59.155 "trtype": "RDMA", 00:21:59.155 "adrfam": "IPv4", 00:21:59.155 "traddr": "192.168.100.8", 00:21:59.155 "trsvcid": "54751" 00:21:59.155 }, 00:21:59.155 "auth": { 00:21:59.155 "state": "completed", 00:21:59.155 "digest": "sha256", 00:21:59.155 "dhgroup": "ffdhe6144" 00:21:59.155 } 00:21:59.155 } 00:21:59.155 ]' 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:59.155 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.414 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.414 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.414 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.414 20:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:21:59.983 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
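
Besides the SPDK-side controller, each pass also exercises the in-kernel NVMe host via nvme-cli: after detaching, the test connects with the same key material rendered in DHHC-1 wire format (base64-encoded secret; the trailing characters before the final colon are a check tag appended to the key bytes) and then disconnects, as echoed immediately above (the disconnect confirmation follows below). A condensed sketch of that round trip, with the long secrets deliberately elided; the full values appear verbatim in the trace:

  # kernel-host round trip via nvme-cli (flags exactly as echoed in the trace)
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid 8013ee90-59d8-e711-906e-00163566263e \
      --dhchap-secret 'DHHC-1:02:...' \
      --dhchap-ctrl-secret 'DHHC-1:01:...'   # secrets elided here, not in the log
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
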
00:22:00.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.242 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:00.242 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.242 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.242 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.242 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.242 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:00.242 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:00.501 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:22:00.501 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.501 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:00.501 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:00.501 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:00.501 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.501 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:00.501 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.501 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.501 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.501 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:00.501 20:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:00.761 00:22:00.761 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.761 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.761 20:42:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.019 { 00:22:01.019 "cntlid": 39, 00:22:01.019 "qid": 0, 00:22:01.019 "state": "enabled", 00:22:01.019 "thread": "nvmf_tgt_poll_group_000", 00:22:01.019 "listen_address": { 00:22:01.019 "trtype": "RDMA", 00:22:01.019 "adrfam": "IPv4", 00:22:01.019 "traddr": "192.168.100.8", 00:22:01.019 "trsvcid": "4420" 00:22:01.019 }, 00:22:01.019 "peer_address": { 00:22:01.019 "trtype": "RDMA", 00:22:01.019 "adrfam": "IPv4", 00:22:01.019 "traddr": "192.168.100.8", 00:22:01.019 "trsvcid": "35652" 00:22:01.019 }, 00:22:01.019 "auth": { 00:22:01.019 "state": "completed", 00:22:01.019 "digest": "sha256", 00:22:01.019 "dhgroup": "ffdhe6144" 00:22:01.019 } 00:22:01.019 } 00:22:01.019 ]' 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.019 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.278 20:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:22:01.846 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.846 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:01.846 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.846 
20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.846 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.846 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.846 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.846 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:01.846 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:02.106 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:22:02.106 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.106 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:02.106 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:02.106 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:02.106 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.106 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.106 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.106 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.106 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.106 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.106 20:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.675 00:22:02.675 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.675 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.675 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.675 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.675 20:42:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.675 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.675 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.934 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.934 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.934 { 00:22:02.934 "cntlid": 41, 00:22:02.934 "qid": 0, 00:22:02.934 "state": "enabled", 00:22:02.934 "thread": "nvmf_tgt_poll_group_000", 00:22:02.934 "listen_address": { 00:22:02.934 "trtype": "RDMA", 00:22:02.934 "adrfam": "IPv4", 00:22:02.934 "traddr": "192.168.100.8", 00:22:02.934 "trsvcid": "4420" 00:22:02.934 }, 00:22:02.934 "peer_address": { 00:22:02.934 "trtype": "RDMA", 00:22:02.934 "adrfam": "IPv4", 00:22:02.934 "traddr": "192.168.100.8", 00:22:02.934 "trsvcid": "38298" 00:22:02.934 }, 00:22:02.934 "auth": { 00:22:02.934 "state": "completed", 00:22:02.934 "digest": "sha256", 00:22:02.934 "dhgroup": "ffdhe8192" 00:22:02.934 } 00:22:02.934 } 00:22:02.934 ]' 00:22:02.934 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.934 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:02.934 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.934 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.934 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.934 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.934 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.934 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.194 20:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:22:03.858 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.858 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:03.858 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.858 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.858 20:42:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.858 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.858 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:03.858 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:04.117 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:22:04.117 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.117 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:04.117 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:04.117 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:04.117 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.117 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.117 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.117 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.117 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.117 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.117 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.377 00:22:04.377 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.377 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.377 20:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.636 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.636 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.636 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
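
Stripped of the xtrace prefixes, each pass of the loop above is the same RPC sequence against the host-side daemon at /var/tmp/host.sock, re-run for every digest/dhgroup/key combination (sha256 and sha384 crossed with groups such as ffdhe6144, ffdhe8192, and null, for key indexes 0 through 3). A condensed sketch using the literal commands echoed by hostrpc and rpc_cmd; the $rpc and $hostnqn shorthands are introduced here for readability, and calling the target-side rpc.py without -s (default socket) is an assumption of the sketch:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

  # 1. restrict the host to one digest and one DH group for this pass
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # 2. target side: allow the host on the subsystem with the key pair under test
  #    (rpc_cmd in the trace wraps the target's rpc.py instance)
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 3. attach, which forces the DH-HMAC-CHAP handshake over RDMA
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 4. confirm the controller came up before inspecting the negotiated auth
  "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # 5. tear down before the next combination
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
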
00:22:04.636 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.636 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.636 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.636 { 00:22:04.636 "cntlid": 43, 00:22:04.636 "qid": 0, 00:22:04.636 "state": "enabled", 00:22:04.636 "thread": "nvmf_tgt_poll_group_000", 00:22:04.636 "listen_address": { 00:22:04.636 "trtype": "RDMA", 00:22:04.636 "adrfam": "IPv4", 00:22:04.636 "traddr": "192.168.100.8", 00:22:04.636 "trsvcid": "4420" 00:22:04.636 }, 00:22:04.636 "peer_address": { 00:22:04.636 "trtype": "RDMA", 00:22:04.636 "adrfam": "IPv4", 00:22:04.636 "traddr": "192.168.100.8", 00:22:04.636 "trsvcid": "37915" 00:22:04.636 }, 00:22:04.636 "auth": { 00:22:04.636 "state": "completed", 00:22:04.636 "digest": "sha256", 00:22:04.636 "dhgroup": "ffdhe8192" 00:22:04.636 } 00:22:04.636 } 00:22:04.636 ]' 00:22:04.636 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.636 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:04.636 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.636 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.636 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.895 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.895 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.895 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.895 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:22:05.461 20:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.720 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:05.720 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.720 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.720 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.720 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.720 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:05.720 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:05.979 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:22:05.979 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.979 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:05.979 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:05.979 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:05.979 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.979 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.979 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.979 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.979 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.979 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.979 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.238 00:22:06.238 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.238 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.238 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.496 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.496 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.496 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.496 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.496 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.496 20:42:54 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.496 { 00:22:06.496 "cntlid": 45, 00:22:06.496 "qid": 0, 00:22:06.496 "state": "enabled", 00:22:06.496 "thread": "nvmf_tgt_poll_group_000", 00:22:06.496 "listen_address": { 00:22:06.496 "trtype": "RDMA", 00:22:06.496 "adrfam": "IPv4", 00:22:06.496 "traddr": "192.168.100.8", 00:22:06.496 "trsvcid": "4420" 00:22:06.496 }, 00:22:06.496 "peer_address": { 00:22:06.496 "trtype": "RDMA", 00:22:06.496 "adrfam": "IPv4", 00:22:06.496 "traddr": "192.168.100.8", 00:22:06.496 "trsvcid": "55274" 00:22:06.496 }, 00:22:06.496 "auth": { 00:22:06.496 "state": "completed", 00:22:06.496 "digest": "sha256", 00:22:06.496 "dhgroup": "ffdhe8192" 00:22:06.496 } 00:22:06.496 } 00:22:06.496 ]' 00:22:06.496 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.496 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:06.496 20:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.496 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.496 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.755 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.755 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.755 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.756 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:22:07.323 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.582 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:07.582 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.582 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.582 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.582 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.582 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:07.582 20:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:07.842 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:22:07.842 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.842 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:07.842 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:07.842 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:07.842 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.842 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:07.842 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.842 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.842 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.842 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.842 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.101 00:22:08.101 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.101 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.101 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.361 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.361 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.361 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.361 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.361 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.361 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.361 { 00:22:08.361 "cntlid": 47, 00:22:08.361 "qid": 0, 00:22:08.361 "state": "enabled", 00:22:08.361 "thread": "nvmf_tgt_poll_group_000", 00:22:08.361 "listen_address": { 00:22:08.361 "trtype": "RDMA", 00:22:08.361 "adrfam": "IPv4", 00:22:08.361 "traddr": "192.168.100.8", 00:22:08.361 
"trsvcid": "4420" 00:22:08.361 }, 00:22:08.361 "peer_address": { 00:22:08.361 "trtype": "RDMA", 00:22:08.361 "adrfam": "IPv4", 00:22:08.361 "traddr": "192.168.100.8", 00:22:08.361 "trsvcid": "38202" 00:22:08.361 }, 00:22:08.361 "auth": { 00:22:08.361 "state": "completed", 00:22:08.361 "digest": "sha256", 00:22:08.361 "dhgroup": "ffdhe8192" 00:22:08.361 } 00:22:08.361 } 00:22:08.361 ]' 00:22:08.361 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.361 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:08.361 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.620 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.620 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.620 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.620 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.620 20:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.620 20:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:22:09.189 20:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.448 20:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:09.448 20:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.448 20:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.448 20:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.448 20:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:09.448 20:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:09.448 20:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.448 20:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:09.449 20:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:09.708 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 null 0 00:22:09.708 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.708 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:09.708 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:09.708 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:09.708 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.708 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.708 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.708 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.708 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.708 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.708 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.967 00:22:09.967 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.967 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.967 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.967 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.967 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.967 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.967 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.967 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.967 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.967 { 00:22:09.967 "cntlid": 49, 00:22:09.967 "qid": 0, 00:22:09.967 "state": "enabled", 00:22:09.967 "thread": "nvmf_tgt_poll_group_000", 00:22:09.967 "listen_address": { 00:22:09.967 "trtype": "RDMA", 00:22:09.967 "adrfam": "IPv4", 00:22:09.967 "traddr": "192.168.100.8", 00:22:09.967 "trsvcid": "4420" 00:22:09.967 }, 00:22:09.967 "peer_address": { 00:22:09.967 "trtype": "RDMA", 00:22:09.967 "adrfam": 
"IPv4", 00:22:09.967 "traddr": "192.168.100.8", 00:22:09.967 "trsvcid": "37744" 00:22:09.967 }, 00:22:09.967 "auth": { 00:22:09.967 "state": "completed", 00:22:09.967 "digest": "sha384", 00:22:09.967 "dhgroup": "null" 00:22:09.967 } 00:22:09.967 } 00:22:09.967 ]' 00:22:09.967 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.967 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:10.227 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.227 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:10.227 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.227 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.227 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.227 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.486 20:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:22:11.055 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.055 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:11.055 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.055 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.055 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.055 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.055 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:11.055 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:11.314 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:22:11.314 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.314 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:11.314 20:42:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:11.314 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:11.314 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.314 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.314 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.314 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.314 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.314 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.314 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.574 00:22:11.574 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.574 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.574 20:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.833 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.833 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.833 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.833 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.833 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.833 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.833 { 00:22:11.833 "cntlid": 51, 00:22:11.833 "qid": 0, 00:22:11.833 "state": "enabled", 00:22:11.833 "thread": "nvmf_tgt_poll_group_000", 00:22:11.833 "listen_address": { 00:22:11.833 "trtype": "RDMA", 00:22:11.833 "adrfam": "IPv4", 00:22:11.833 "traddr": "192.168.100.8", 00:22:11.833 "trsvcid": "4420" 00:22:11.833 }, 00:22:11.833 "peer_address": { 00:22:11.833 "trtype": "RDMA", 00:22:11.833 "adrfam": "IPv4", 00:22:11.833 "traddr": "192.168.100.8", 00:22:11.833 "trsvcid": "44248" 00:22:11.833 }, 00:22:11.833 "auth": { 00:22:11.833 "state": "completed", 00:22:11.833 "digest": "sha384", 00:22:11.833 "dhgroup": "null" 00:22:11.833 } 00:22:11.833 } 00:22:11.833 ]' 00:22:11.833 20:43:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.833 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:11.833 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.833 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:11.833 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.833 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.833 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.833 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.092 20:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:22:12.661 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.661 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:12.661 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.661 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.661 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.661 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.661 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:12.661 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:12.920 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:22:12.920 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.920 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:12.920 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:12.920 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:12.920 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
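
The ckey=(...) line echoed just above is the detail that lets the same connect_authenticate helper cover both unidirectional and bidirectional authentication: bash's ${var:+word} expansion yields the --dhchap-ctrlr-key argument only when a controller key is defined for that key index (the key3 passes in this trace run without one). A minimal sketch of the idiom; $subnqn and $hostnqn are stand-in names for the literal NQNs shown in the trace:

  # from target/auth.sh ($3 is connect_authenticate's key index, e.g. 2)
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  # with ckeys[2] set this expands to: --dhchap-ctrlr-key ckey2
  # with ckeys[3] unset (the key3 passes) it expands to nothing at all
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
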
00:22:12.921 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.921 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.921 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.921 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.921 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.921 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.181 00:22:13.181 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.181 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.181 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.441 { 00:22:13.441 "cntlid": 53, 00:22:13.441 "qid": 0, 00:22:13.441 "state": "enabled", 00:22:13.441 "thread": "nvmf_tgt_poll_group_000", 00:22:13.441 "listen_address": { 00:22:13.441 "trtype": "RDMA", 00:22:13.441 "adrfam": "IPv4", 00:22:13.441 "traddr": "192.168.100.8", 00:22:13.441 "trsvcid": "4420" 00:22:13.441 }, 00:22:13.441 "peer_address": { 00:22:13.441 "trtype": "RDMA", 00:22:13.441 "adrfam": "IPv4", 00:22:13.441 "traddr": "192.168.100.8", 00:22:13.441 "trsvcid": "59735" 00:22:13.441 }, 00:22:13.441 "auth": { 00:22:13.441 "state": "completed", 00:22:13.441 "digest": "sha384", 00:22:13.441 "dhgroup": "null" 00:22:13.441 } 00:22:13.441 } 00:22:13.441 ]' 00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
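
Each pass is judged by what the target actually negotiated, not merely by whether the attach succeeded: the test pulls the subsystem's qpairs and asserts on the auth fields, which is what the jq probes echoed around this point are doing. A compact sketch of that check for the sha384/null pass; rpc_cmd again stands for the target-side rpc.py wrapper, and piping through a $qpairs variable with here-strings is a restructuring assumption of the sketch:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The escaped forms in the trace, such as [[ sha384 == \s\h\a\3\8\4 ]], are just how xtrace prints the right-hand side of [[ == ]]; the comparison itself is a literal string equality.
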
00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.441 20:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.700 20:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:22:14.269 20:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.528 20:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:14.528 20:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.529 20:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.529 20:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.529 20:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.529 20:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:14.529 20:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:14.529 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:22:14.529 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.529 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:14.529 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:14.529 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:14.529 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.529 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:14.529 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:14.529 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.529 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.529 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.529 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.788 00:22:14.788 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.788 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.788 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.047 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.047 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.047 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.047 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.047 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.047 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.047 { 00:22:15.047 "cntlid": 55, 00:22:15.047 "qid": 0, 00:22:15.047 "state": "enabled", 00:22:15.047 "thread": "nvmf_tgt_poll_group_000", 00:22:15.047 "listen_address": { 00:22:15.047 "trtype": "RDMA", 00:22:15.047 "adrfam": "IPv4", 00:22:15.047 "traddr": "192.168.100.8", 00:22:15.047 "trsvcid": "4420" 00:22:15.047 }, 00:22:15.047 "peer_address": { 00:22:15.047 "trtype": "RDMA", 00:22:15.047 "adrfam": "IPv4", 00:22:15.047 "traddr": "192.168.100.8", 00:22:15.047 "trsvcid": "52824" 00:22:15.047 }, 00:22:15.047 "auth": { 00:22:15.047 "state": "completed", 00:22:15.047 "digest": "sha384", 00:22:15.047 "dhgroup": "null" 00:22:15.047 } 00:22:15.047 } 00:22:15.047 ]' 00:22:15.047 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.047 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:15.047 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.047 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:15.047 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.306 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.306 20:43:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.306 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.306 20:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:22:15.874 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.133 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:16.133 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.133 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.133 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.133 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:16.133 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.133 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:16.133 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:16.392 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:22:16.392 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.392 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:16.392 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:16.392 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:16.392 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.392 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.392 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.392 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.392 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.392 20:43:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.392 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.392 00:22:16.651 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.651 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.651 20:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.651 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.651 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.651 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.651 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.651 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.651 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.651 { 00:22:16.651 "cntlid": 57, 00:22:16.651 "qid": 0, 00:22:16.651 "state": "enabled", 00:22:16.651 "thread": "nvmf_tgt_poll_group_000", 00:22:16.651 "listen_address": { 00:22:16.651 "trtype": "RDMA", 00:22:16.651 "adrfam": "IPv4", 00:22:16.651 "traddr": "192.168.100.8", 00:22:16.651 "trsvcid": "4420" 00:22:16.651 }, 00:22:16.651 "peer_address": { 00:22:16.651 "trtype": "RDMA", 00:22:16.651 "adrfam": "IPv4", 00:22:16.651 "traddr": "192.168.100.8", 00:22:16.651 "trsvcid": "47476" 00:22:16.651 }, 00:22:16.651 "auth": { 00:22:16.651 "state": "completed", 00:22:16.651 "digest": "sha384", 00:22:16.651 "dhgroup": "ffdhe2048" 00:22:16.651 } 00:22:16.651 } 00:22:16.651 ]' 00:22:16.651 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.651 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:16.651 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.910 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:16.910 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.910 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.910 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.910 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.910 20:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:22:17.844 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.844 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:17.844 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.844 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.844 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.844 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.844 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:17.844 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:17.844 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:22:17.844 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.844 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:17.844 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:17.844 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:17.845 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.845 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.845 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.845 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.845 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.845 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.845 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.103 00:22:18.103 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.103 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.103 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.362 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.362 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.362 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.362 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.362 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.362 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.362 { 00:22:18.362 "cntlid": 59, 00:22:18.362 "qid": 0, 00:22:18.362 "state": "enabled", 00:22:18.362 "thread": "nvmf_tgt_poll_group_000", 00:22:18.362 "listen_address": { 00:22:18.362 "trtype": "RDMA", 00:22:18.362 "adrfam": "IPv4", 00:22:18.362 "traddr": "192.168.100.8", 00:22:18.362 "trsvcid": "4420" 00:22:18.362 }, 00:22:18.362 "peer_address": { 00:22:18.362 "trtype": "RDMA", 00:22:18.362 "adrfam": "IPv4", 00:22:18.362 "traddr": "192.168.100.8", 00:22:18.362 "trsvcid": "54019" 00:22:18.362 }, 00:22:18.362 "auth": { 00:22:18.362 "state": "completed", 00:22:18.362 "digest": "sha384", 00:22:18.362 "dhgroup": "ffdhe2048" 00:22:18.362 } 00:22:18.362 } 00:22:18.362 ]' 00:22:18.362 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.362 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:18.362 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.362 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:18.362 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.621 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.621 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.621 20:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.621 20:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:22:19.188 20:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.447 20:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:19.447 20:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.447 20:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.447 20:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.447 20:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:19.447 20:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:19.447 20:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:19.705 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:22:19.705 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:19.705 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:19.705 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:19.706 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:19.706 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.706 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.706 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.706 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.706 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.706 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.706 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.706 00:22:19.964 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.964 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.964 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.964 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.964 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.964 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.964 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.964 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.964 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.964 { 00:22:19.964 "cntlid": 61, 00:22:19.964 "qid": 0, 00:22:19.964 "state": "enabled", 00:22:19.964 "thread": "nvmf_tgt_poll_group_000", 00:22:19.964 "listen_address": { 00:22:19.964 "trtype": "RDMA", 00:22:19.964 "adrfam": "IPv4", 00:22:19.964 "traddr": "192.168.100.8", 00:22:19.964 "trsvcid": "4420" 00:22:19.965 }, 00:22:19.965 "peer_address": { 00:22:19.965 "trtype": "RDMA", 00:22:19.965 "adrfam": "IPv4", 00:22:19.965 "traddr": "192.168.100.8", 00:22:19.965 "trsvcid": "60095" 00:22:19.965 }, 00:22:19.965 "auth": { 00:22:19.965 "state": "completed", 00:22:19.965 "digest": "sha384", 00:22:19.965 "dhgroup": "ffdhe2048" 00:22:19.965 } 00:22:19.965 } 00:22:19.965 ]' 00:22:19.965 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.965 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:19.965 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.223 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:20.223 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.223 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.223 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.223 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.223 20:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret 
DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.161 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.420 00:22:21.420 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.420 20:43:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.420 20:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.679 { 00:22:21.679 "cntlid": 63, 00:22:21.679 "qid": 0, 00:22:21.679 "state": "enabled", 00:22:21.679 "thread": "nvmf_tgt_poll_group_000", 00:22:21.679 "listen_address": { 00:22:21.679 "trtype": "RDMA", 00:22:21.679 "adrfam": "IPv4", 00:22:21.679 "traddr": "192.168.100.8", 00:22:21.679 "trsvcid": "4420" 00:22:21.679 }, 00:22:21.679 "peer_address": { 00:22:21.679 "trtype": "RDMA", 00:22:21.679 "adrfam": "IPv4", 00:22:21.679 "traddr": "192.168.100.8", 00:22:21.679 "trsvcid": "46996" 00:22:21.679 }, 00:22:21.679 "auth": { 00:22:21.679 "state": "completed", 00:22:21.679 "digest": "sha384", 00:22:21.679 "dhgroup": "ffdhe2048" 00:22:21.679 } 00:22:21.679 } 00:22:21.679 ]' 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.679 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.939 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:22:22.507 20:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.766 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:22.766 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.766 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.766 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.766 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.766 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:22.766 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:22.767 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:22.767 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:22:22.767 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.767 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:22.767 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:22.767 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:22.767 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.767 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.767 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.767 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.767 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.767 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.767 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.026 00:22:23.026 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.026 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.026 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.285 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.285 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.285 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.285 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.285 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.285 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.285 { 00:22:23.285 "cntlid": 65, 00:22:23.285 "qid": 0, 00:22:23.285 "state": "enabled", 00:22:23.285 "thread": "nvmf_tgt_poll_group_000", 00:22:23.285 "listen_address": { 00:22:23.285 "trtype": "RDMA", 00:22:23.285 "adrfam": "IPv4", 00:22:23.285 "traddr": "192.168.100.8", 00:22:23.285 "trsvcid": "4420" 00:22:23.285 }, 00:22:23.285 "peer_address": { 00:22:23.285 "trtype": "RDMA", 00:22:23.285 "adrfam": "IPv4", 00:22:23.285 "traddr": "192.168.100.8", 00:22:23.285 "trsvcid": "55275" 00:22:23.285 }, 00:22:23.285 "auth": { 00:22:23.285 "state": "completed", 00:22:23.285 "digest": "sha384", 00:22:23.285 "dhgroup": "ffdhe3072" 00:22:23.285 } 00:22:23.285 } 00:22:23.285 ]' 00:22:23.285 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.285 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:23.285 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.285 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:23.285 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.543 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.543 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.543 20:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.543 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.477 20:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.736 00:22:24.736 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:24.736 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:24.736 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:24.996 { 00:22:24.996 "cntlid": 67, 00:22:24.996 "qid": 0, 00:22:24.996 "state": "enabled", 00:22:24.996 "thread": "nvmf_tgt_poll_group_000", 00:22:24.996 "listen_address": { 00:22:24.996 "trtype": "RDMA", 00:22:24.996 "adrfam": "IPv4", 00:22:24.996 "traddr": "192.168.100.8", 00:22:24.996 "trsvcid": "4420" 00:22:24.996 }, 00:22:24.996 "peer_address": { 00:22:24.996 "trtype": "RDMA", 00:22:24.996 "adrfam": "IPv4", 00:22:24.996 "traddr": "192.168.100.8", 00:22:24.996 "trsvcid": "48200" 00:22:24.996 }, 00:22:24.996 "auth": { 00:22:24.996 "state": "completed", 00:22:24.996 "digest": "sha384", 00:22:24.996 "dhgroup": "ffdhe3072" 00:22:24.996 } 00:22:24.996 } 00:22:24.996 ]' 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.996 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.255 20:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:22:25.883 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.142 20:43:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.142 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.400 00:22:26.400 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:26.400 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.400 20:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:26.659 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.659 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.659 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.659 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.659 
20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.659 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.659 { 00:22:26.659 "cntlid": 69, 00:22:26.659 "qid": 0, 00:22:26.659 "state": "enabled", 00:22:26.659 "thread": "nvmf_tgt_poll_group_000", 00:22:26.659 "listen_address": { 00:22:26.659 "trtype": "RDMA", 00:22:26.659 "adrfam": "IPv4", 00:22:26.659 "traddr": "192.168.100.8", 00:22:26.659 "trsvcid": "4420" 00:22:26.659 }, 00:22:26.659 "peer_address": { 00:22:26.659 "trtype": "RDMA", 00:22:26.659 "adrfam": "IPv4", 00:22:26.659 "traddr": "192.168.100.8", 00:22:26.659 "trsvcid": "38021" 00:22:26.659 }, 00:22:26.659 "auth": { 00:22:26.659 "state": "completed", 00:22:26.659 "digest": "sha384", 00:22:26.659 "dhgroup": "ffdhe3072" 00:22:26.659 } 00:22:26.659 } 00:22:26.659 ]' 00:22:26.659 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:26.659 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:26.659 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:26.659 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:26.659 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:26.659 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.659 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.659 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.917 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:22:27.485 20:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:27.743 20:43:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.743 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.002 00:22:28.002 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:28.002 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.002 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:28.260 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.260 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.260 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.260 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.260 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.260 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.260 { 00:22:28.260 "cntlid": 71, 00:22:28.260 "qid": 0, 00:22:28.260 "state": "enabled", 00:22:28.260 "thread": "nvmf_tgt_poll_group_000", 00:22:28.261 
"listen_address": { 00:22:28.261 "trtype": "RDMA", 00:22:28.261 "adrfam": "IPv4", 00:22:28.261 "traddr": "192.168.100.8", 00:22:28.261 "trsvcid": "4420" 00:22:28.261 }, 00:22:28.261 "peer_address": { 00:22:28.261 "trtype": "RDMA", 00:22:28.261 "adrfam": "IPv4", 00:22:28.261 "traddr": "192.168.100.8", 00:22:28.261 "trsvcid": "56521" 00:22:28.261 }, 00:22:28.261 "auth": { 00:22:28.261 "state": "completed", 00:22:28.261 "digest": "sha384", 00:22:28.261 "dhgroup": "ffdhe3072" 00:22:28.261 } 00:22:28.261 } 00:22:28.261 ]' 00:22:28.261 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.261 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:28.261 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.261 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:28.261 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.520 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.520 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.520 20:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.520 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:22:29.089 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe4096 0 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.348 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.606 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.606 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.606 20:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.606 00:22:29.864 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:29.864 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.864 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:29.864 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.864 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.864 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.864 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.864 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.864 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:29.864 { 00:22:29.864 "cntlid": 73, 00:22:29.864 "qid": 0, 00:22:29.864 "state": "enabled", 00:22:29.864 "thread": "nvmf_tgt_poll_group_000", 00:22:29.864 "listen_address": { 00:22:29.864 "trtype": "RDMA", 00:22:29.864 "adrfam": "IPv4", 00:22:29.865 "traddr": "192.168.100.8", 00:22:29.865 "trsvcid": "4420" 00:22:29.865 }, 00:22:29.865 "peer_address": { 00:22:29.865 "trtype": "RDMA", 00:22:29.865 
"adrfam": "IPv4", 00:22:29.865 "traddr": "192.168.100.8", 00:22:29.865 "trsvcid": "40115" 00:22:29.865 }, 00:22:29.865 "auth": { 00:22:29.865 "state": "completed", 00:22:29.865 "digest": "sha384", 00:22:29.865 "dhgroup": "ffdhe4096" 00:22:29.865 } 00:22:29.865 } 00:22:29.865 ]' 00:22:29.865 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:29.865 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:29.865 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.123 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:30.123 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.123 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.123 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.123 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.123 20:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:22:31.060 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.060 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:31.060 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.060 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.060 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.060 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:31.060 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:31.060 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:31.060 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:22:31.060 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:31.060 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:22:31.060 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:31.060 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:31.061 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.061 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.061 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.061 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.061 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.061 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.061 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.320 00:22:31.320 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:31.320 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:31.320 20:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.579 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.579 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.579 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.579 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.579 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.579 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:31.579 { 00:22:31.579 "cntlid": 75, 00:22:31.579 "qid": 0, 00:22:31.579 "state": "enabled", 00:22:31.579 "thread": "nvmf_tgt_poll_group_000", 00:22:31.579 "listen_address": { 00:22:31.579 "trtype": "RDMA", 00:22:31.579 "adrfam": "IPv4", 00:22:31.579 "traddr": "192.168.100.8", 00:22:31.579 "trsvcid": "4420" 00:22:31.579 }, 00:22:31.579 "peer_address": { 00:22:31.579 "trtype": "RDMA", 00:22:31.579 "adrfam": "IPv4", 00:22:31.579 "traddr": "192.168.100.8", 00:22:31.579 "trsvcid": "52673" 00:22:31.579 }, 00:22:31.579 "auth": { 00:22:31.579 "state": "completed", 00:22:31.579 "digest": "sha384", 00:22:31.579 "dhgroup": "ffdhe4096" 00:22:31.579 } 00:22:31.579 } 
00:22:31.579 ]' 00:22:31.579 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:31.580 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:31.580 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:31.580 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:31.580 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:31.839 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.839 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.839 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.839 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:22:32.407 20:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.666 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:32.666 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.666 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.666 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.666 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:32.666 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:32.666 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:32.925 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:22:32.925 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.925 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:32.925 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:32.925 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:32.925 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.925 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.925 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.925 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.925 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.925 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.925 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.184 00:22:33.184 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:33.184 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.184 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:33.184 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.184 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.184 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.184 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.443 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.443 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:33.443 { 00:22:33.443 "cntlid": 77, 00:22:33.443 "qid": 0, 00:22:33.443 "state": "enabled", 00:22:33.443 "thread": "nvmf_tgt_poll_group_000", 00:22:33.443 "listen_address": { 00:22:33.443 "trtype": "RDMA", 00:22:33.443 "adrfam": "IPv4", 00:22:33.443 "traddr": "192.168.100.8", 00:22:33.443 "trsvcid": "4420" 00:22:33.443 }, 00:22:33.443 "peer_address": { 00:22:33.443 "trtype": "RDMA", 00:22:33.443 "adrfam": "IPv4", 00:22:33.443 "traddr": "192.168.100.8", 00:22:33.443 "trsvcid": "36583" 00:22:33.443 }, 00:22:33.443 "auth": { 00:22:33.443 "state": "completed", 00:22:33.443 "digest": "sha384", 00:22:33.443 "dhgroup": "ffdhe4096" 00:22:33.443 } 00:22:33.443 } 00:22:33.443 ]' 00:22:33.444 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:33.444 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:33.444 20:43:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:33.444 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:33.444 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:33.444 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.444 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.444 20:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.702 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:22:34.271 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.271 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:34.271 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.271 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.271 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.271 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:34.271 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:34.271 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:34.531 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:22:34.531 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:34.531 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:34.531 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:34.531 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:34.531 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.531 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:34.531 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.531 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.531 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.532 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.532 20:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.791 00:22:34.791 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:34.791 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:34.791 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:35.050 { 00:22:35.050 "cntlid": 79, 00:22:35.050 "qid": 0, 00:22:35.050 "state": "enabled", 00:22:35.050 "thread": "nvmf_tgt_poll_group_000", 00:22:35.050 "listen_address": { 00:22:35.050 "trtype": "RDMA", 00:22:35.050 "adrfam": "IPv4", 00:22:35.050 "traddr": "192.168.100.8", 00:22:35.050 "trsvcid": "4420" 00:22:35.050 }, 00:22:35.050 "peer_address": { 00:22:35.050 "trtype": "RDMA", 00:22:35.050 "adrfam": "IPv4", 00:22:35.050 "traddr": "192.168.100.8", 00:22:35.050 "trsvcid": "51503" 00:22:35.050 }, 00:22:35.050 "auth": { 00:22:35.050 "state": "completed", 00:22:35.050 "digest": "sha384", 00:22:35.050 "dhgroup": "ffdhe4096" 00:22:35.050 } 00:22:35.050 } 00:22:35.050 ]' 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.050 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.308 20:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:22:35.876 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.876 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:35.876 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.876 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.876 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.876 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:35.876 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:35.876 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:35.876 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:36.135 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:22:36.135 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:36.135 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:36.136 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:36.136 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:36.136 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.136 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.136 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.136 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:36.136 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.136 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.136 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.394 00:22:36.394 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:36.394 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.394 20:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.651 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.651 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.652 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.652 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.652 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.652 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:36.652 { 00:22:36.652 "cntlid": 81, 00:22:36.652 "qid": 0, 00:22:36.652 "state": "enabled", 00:22:36.652 "thread": "nvmf_tgt_poll_group_000", 00:22:36.652 "listen_address": { 00:22:36.652 "trtype": "RDMA", 00:22:36.652 "adrfam": "IPv4", 00:22:36.652 "traddr": "192.168.100.8", 00:22:36.652 "trsvcid": "4420" 00:22:36.652 }, 00:22:36.652 "peer_address": { 00:22:36.652 "trtype": "RDMA", 00:22:36.652 "adrfam": "IPv4", 00:22:36.652 "traddr": "192.168.100.8", 00:22:36.652 "trsvcid": "51491" 00:22:36.652 }, 00:22:36.652 "auth": { 00:22:36.652 "state": "completed", 00:22:36.652 "digest": "sha384", 00:22:36.652 "dhgroup": "ffdhe6144" 00:22:36.652 } 00:22:36.652 } 00:22:36.652 ]' 00:22:36.652 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:36.652 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:36.652 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:36.937 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:36.937 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:36.937 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.937 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.937 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.937 20:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:22:37.506 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.765 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:37.765 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.765 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.765 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.765 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:37.765 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:37.765 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:37.765 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:22:38.024 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:38.024 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:38.024 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:38.024 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:38.024 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.024 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.024 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.024 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.024 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.024 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.024 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.282 00:22:38.282 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:38.282 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.282 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:38.541 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.541 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.541 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.541 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.542 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.542 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:38.542 { 00:22:38.542 "cntlid": 83, 00:22:38.542 "qid": 0, 00:22:38.542 "state": "enabled", 00:22:38.542 "thread": "nvmf_tgt_poll_group_000", 00:22:38.542 "listen_address": { 00:22:38.542 "trtype": "RDMA", 00:22:38.542 "adrfam": "IPv4", 00:22:38.542 "traddr": "192.168.100.8", 00:22:38.542 "trsvcid": "4420" 00:22:38.542 }, 00:22:38.542 "peer_address": { 00:22:38.542 "trtype": "RDMA", 00:22:38.542 "adrfam": "IPv4", 00:22:38.542 "traddr": "192.168.100.8", 00:22:38.542 "trsvcid": "55271" 00:22:38.542 }, 00:22:38.542 "auth": { 00:22:38.542 "state": "completed", 00:22:38.542 "digest": "sha384", 00:22:38.542 "dhgroup": "ffdhe6144" 00:22:38.542 } 00:22:38.542 } 00:22:38.542 ]' 00:22:38.542 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:38.542 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:38.542 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:38.542 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:38.542 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:38.542 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.542 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.542 20:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:38.801 20:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:22:39.371 20:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.371 20:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:39.371 20:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.371 20:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.371 20:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.371 20:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:39.371 20:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:39.371 20:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:39.632 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:22:39.632 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:39.632 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:39.632 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:39.632 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:39.632 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.632 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.632 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.632 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.632 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.632 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.632 20:43:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.891 00:22:39.891 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:39.891 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:39.891 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.150 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.150 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.150 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.150 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.150 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.150 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:40.150 { 00:22:40.150 "cntlid": 85, 00:22:40.150 "qid": 0, 00:22:40.150 "state": "enabled", 00:22:40.150 "thread": "nvmf_tgt_poll_group_000", 00:22:40.150 "listen_address": { 00:22:40.150 "trtype": "RDMA", 00:22:40.150 "adrfam": "IPv4", 00:22:40.150 "traddr": "192.168.100.8", 00:22:40.150 "trsvcid": "4420" 00:22:40.150 }, 00:22:40.150 "peer_address": { 00:22:40.150 "trtype": "RDMA", 00:22:40.150 "adrfam": "IPv4", 00:22:40.150 "traddr": "192.168.100.8", 00:22:40.150 "trsvcid": "57506" 00:22:40.150 }, 00:22:40.150 "auth": { 00:22:40.150 "state": "completed", 00:22:40.150 "digest": "sha384", 00:22:40.150 "dhgroup": "ffdhe6144" 00:22:40.150 } 00:22:40.150 } 00:22:40.150 ]' 00:22:40.150 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:40.150 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:40.150 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:40.409 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:40.409 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:40.409 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.410 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.410 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.410 20:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:22:40.979 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.240 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:41.240 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.240 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.240 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.240 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:41.240 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:41.240 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:41.500 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:22:41.500 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:41.500 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:41.500 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:41.500 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:41.500 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.500 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:41.500 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.500 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.500 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.500 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:41.500 20:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:41.760 00:22:41.760 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:41.760 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:41.760 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:42.019 { 00:22:42.019 "cntlid": 87, 00:22:42.019 "qid": 0, 00:22:42.019 "state": "enabled", 00:22:42.019 "thread": "nvmf_tgt_poll_group_000", 00:22:42.019 "listen_address": { 00:22:42.019 "trtype": "RDMA", 00:22:42.019 "adrfam": "IPv4", 00:22:42.019 "traddr": "192.168.100.8", 00:22:42.019 "trsvcid": "4420" 00:22:42.019 }, 00:22:42.019 "peer_address": { 00:22:42.019 "trtype": "RDMA", 00:22:42.019 "adrfam": "IPv4", 00:22:42.019 "traddr": "192.168.100.8", 00:22:42.019 "trsvcid": "41931" 00:22:42.019 }, 00:22:42.019 "auth": { 00:22:42.019 "state": "completed", 00:22:42.019 "digest": "sha384", 00:22:42.019 "dhgroup": "ffdhe6144" 00:22:42.019 } 00:22:42.019 } 00:22:42.019 ]' 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.019 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.278 20:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:22:42.846 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.846 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:42.846 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.846 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.846 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.846 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:42.846 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:42.846 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:42.846 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:43.105 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:22:43.105 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:43.105 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:43.105 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:43.105 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:43.105 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.105 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.105 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.105 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.105 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.105 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.105 20:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.674 00:22:43.674 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:22:43.674 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:43.674 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.932 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.932 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.932 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.932 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.932 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.932 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.932 { 00:22:43.932 "cntlid": 89, 00:22:43.932 "qid": 0, 00:22:43.932 "state": "enabled", 00:22:43.932 "thread": "nvmf_tgt_poll_group_000", 00:22:43.932 "listen_address": { 00:22:43.932 "trtype": "RDMA", 00:22:43.932 "adrfam": "IPv4", 00:22:43.932 "traddr": "192.168.100.8", 00:22:43.932 "trsvcid": "4420" 00:22:43.932 }, 00:22:43.932 "peer_address": { 00:22:43.933 "trtype": "RDMA", 00:22:43.933 "adrfam": "IPv4", 00:22:43.933 "traddr": "192.168.100.8", 00:22:43.933 "trsvcid": "35165" 00:22:43.933 }, 00:22:43.933 "auth": { 00:22:43.933 "state": "completed", 00:22:43.933 "digest": "sha384", 00:22:43.933 "dhgroup": "ffdhe8192" 00:22:43.933 } 00:22:43.933 } 00:22:43.933 ]' 00:22:43.933 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:43.933 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:43.933 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:43.933 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:43.933 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:43.933 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.933 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.933 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.191 20:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:22:44.755 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.755 
20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:44.755 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.755 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.755 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.755 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:44.755 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:44.755 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:45.012 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:22:45.012 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:45.012 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:45.012 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:45.012 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:45.012 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.012 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.012 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.012 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.012 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.012 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.012 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.579 00:22:45.579 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:45.580 20:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.580 20:43:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:45.580 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.580 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.580 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.580 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.580 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.839 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:45.839 { 00:22:45.839 "cntlid": 91, 00:22:45.839 "qid": 0, 00:22:45.839 "state": "enabled", 00:22:45.839 "thread": "nvmf_tgt_poll_group_000", 00:22:45.839 "listen_address": { 00:22:45.839 "trtype": "RDMA", 00:22:45.839 "adrfam": "IPv4", 00:22:45.839 "traddr": "192.168.100.8", 00:22:45.839 "trsvcid": "4420" 00:22:45.839 }, 00:22:45.839 "peer_address": { 00:22:45.839 "trtype": "RDMA", 00:22:45.839 "adrfam": "IPv4", 00:22:45.839 "traddr": "192.168.100.8", 00:22:45.839 "trsvcid": "53236" 00:22:45.839 }, 00:22:45.839 "auth": { 00:22:45.839 "state": "completed", 00:22:45.839 "digest": "sha384", 00:22:45.839 "dhgroup": "ffdhe8192" 00:22:45.839 } 00:22:45.839 } 00:22:45.839 ]' 00:22:45.839 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:45.839 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:45.839 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:45.839 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:45.839 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:45.839 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.839 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.839 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.096 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:22:46.661 20:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.661 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:46.661 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.661 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.661 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.661 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:46.661 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:46.661 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:46.920 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:22:46.920 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:46.920 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:46.920 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:46.920 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:46.920 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.920 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.920 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.920 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.920 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.920 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.920 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.178 00:22:47.437 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:47.437 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:47.437 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.437 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.437 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.437 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.437 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.437 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.437 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:47.437 { 00:22:47.437 "cntlid": 93, 00:22:47.437 "qid": 0, 00:22:47.437 "state": "enabled", 00:22:47.437 "thread": "nvmf_tgt_poll_group_000", 00:22:47.437 "listen_address": { 00:22:47.437 "trtype": "RDMA", 00:22:47.437 "adrfam": "IPv4", 00:22:47.437 "traddr": "192.168.100.8", 00:22:47.437 "trsvcid": "4420" 00:22:47.437 }, 00:22:47.437 "peer_address": { 00:22:47.437 "trtype": "RDMA", 00:22:47.437 "adrfam": "IPv4", 00:22:47.437 "traddr": "192.168.100.8", 00:22:47.437 "trsvcid": "54912" 00:22:47.437 }, 00:22:47.437 "auth": { 00:22:47.437 "state": "completed", 00:22:47.437 "digest": "sha384", 00:22:47.437 "dhgroup": "ffdhe8192" 00:22:47.437 } 00:22:47.437 } 00:22:47.437 ]' 00:22:47.437 20:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:47.696 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:47.696 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:47.696 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:47.696 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:47.696 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.696 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.696 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.955 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:22:48.521 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.521 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:48.521 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.521 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.521 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.521 20:43:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:48.521 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:48.521 20:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:48.779 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:22:48.779 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:48.779 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:48.779 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:48.779 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:48.779 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.779 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:48.779 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.779 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.779 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.779 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:48.779 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:49.038 00:22:49.297 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:49.297 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:49.297 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.297 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.297 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.297 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.297 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.297 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.297 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:49.297 { 00:22:49.297 "cntlid": 95, 00:22:49.297 "qid": 0, 00:22:49.297 "state": "enabled", 00:22:49.297 "thread": "nvmf_tgt_poll_group_000", 00:22:49.297 "listen_address": { 00:22:49.297 "trtype": "RDMA", 00:22:49.297 "adrfam": "IPv4", 00:22:49.297 "traddr": "192.168.100.8", 00:22:49.297 "trsvcid": "4420" 00:22:49.297 }, 00:22:49.297 "peer_address": { 00:22:49.297 "trtype": "RDMA", 00:22:49.297 "adrfam": "IPv4", 00:22:49.297 "traddr": "192.168.100.8", 00:22:49.297 "trsvcid": "57293" 00:22:49.297 }, 00:22:49.297 "auth": { 00:22:49.297 "state": "completed", 00:22:49.297 "digest": "sha384", 00:22:49.297 "dhgroup": "ffdhe8192" 00:22:49.297 } 00:22:49.297 } 00:22:49.297 ]' 00:22:49.297 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:49.297 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:49.556 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:49.556 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:49.556 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:49.556 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.556 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.556 20:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.816 20:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:22:50.383 20:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.383 20:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:50.383 20:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.383 20:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.383 20:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.383 20:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:50.383 20:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:50.383 20:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:50.383 20:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:50.383 20:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:50.641 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:22:50.641 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:50.641 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:50.641 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:50.641 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:50.641 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.641 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.641 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.641 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.641 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.641 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.641 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.900 00:22:50.900 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:50.900 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:50.900 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.900 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.900 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.900 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.900 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.900 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.900 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:22:50.900 { 00:22:50.900 "cntlid": 97, 00:22:50.900 "qid": 0, 00:22:50.900 "state": "enabled", 00:22:50.900 "thread": "nvmf_tgt_poll_group_000", 00:22:50.900 "listen_address": { 00:22:50.900 "trtype": "RDMA", 00:22:50.900 "adrfam": "IPv4", 00:22:50.900 "traddr": "192.168.100.8", 00:22:50.900 "trsvcid": "4420" 00:22:50.900 }, 00:22:50.900 "peer_address": { 00:22:50.900 "trtype": "RDMA", 00:22:50.900 "adrfam": "IPv4", 00:22:50.900 "traddr": "192.168.100.8", 00:22:50.900 "trsvcid": "40056" 00:22:50.900 }, 00:22:50.900 "auth": { 00:22:50.900 "state": "completed", 00:22:50.900 "digest": "sha512", 00:22:50.900 "dhgroup": "null" 00:22:50.900 } 00:22:50.900 } 00:22:50.900 ]' 00:22:50.900 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:51.159 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.159 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:51.159 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:51.159 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:51.159 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.159 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.159 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.418 20:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:22:51.986 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.986 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:51.986 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.986 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.986 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.986 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:51.986 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:51.986 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:52.249 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:22:52.249 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:52.249 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:52.249 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:52.249 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:52.249 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.249 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.249 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.249 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.249 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.249 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.249 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.552 00:22:52.552 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:52.552 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:52.552 20:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.552 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.552 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.552 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.552 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.552 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.552 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:52.552 { 00:22:52.552 "cntlid": 99, 00:22:52.552 "qid": 0, 00:22:52.552 "state": "enabled", 00:22:52.552 "thread": "nvmf_tgt_poll_group_000", 00:22:52.552 "listen_address": { 00:22:52.552 "trtype": "RDMA", 00:22:52.552 "adrfam": "IPv4", 00:22:52.552 
"traddr": "192.168.100.8", 00:22:52.552 "trsvcid": "4420" 00:22:52.552 }, 00:22:52.552 "peer_address": { 00:22:52.552 "trtype": "RDMA", 00:22:52.552 "adrfam": "IPv4", 00:22:52.552 "traddr": "192.168.100.8", 00:22:52.552 "trsvcid": "52576" 00:22:52.552 }, 00:22:52.552 "auth": { 00:22:52.552 "state": "completed", 00:22:52.552 "digest": "sha512", 00:22:52.552 "dhgroup": "null" 00:22:52.552 } 00:22:52.552 } 00:22:52.552 ]' 00:22:52.810 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:52.810 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:52.810 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:52.810 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:52.810 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:52.811 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.811 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.811 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.068 20:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:22:53.636 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.636 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:53.636 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.636 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.636 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.636 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:53.636 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:53.636 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:53.896 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:22:53.896 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:53.896 
20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:53.896 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:53.896 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:53.896 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.896 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.896 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.896 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.896 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.896 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.896 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.155 00:22:54.155 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:54.155 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:54.155 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.414 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.414 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.414 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.414 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.414 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.414 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:54.414 { 00:22:54.414 "cntlid": 101, 00:22:54.414 "qid": 0, 00:22:54.414 "state": "enabled", 00:22:54.414 "thread": "nvmf_tgt_poll_group_000", 00:22:54.414 "listen_address": { 00:22:54.414 "trtype": "RDMA", 00:22:54.414 "adrfam": "IPv4", 00:22:54.414 "traddr": "192.168.100.8", 00:22:54.414 "trsvcid": "4420" 00:22:54.414 }, 00:22:54.414 "peer_address": { 00:22:54.414 "trtype": "RDMA", 00:22:54.414 "adrfam": "IPv4", 00:22:54.414 "traddr": "192.168.100.8", 00:22:54.414 "trsvcid": "57878" 00:22:54.414 }, 00:22:54.414 "auth": { 00:22:54.414 "state": "completed", 00:22:54.414 "digest": 
"sha512", 00:22:54.414 "dhgroup": "null" 00:22:54.414 } 00:22:54.414 } 00:22:54.414 ]' 00:22:54.414 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:54.414 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:54.414 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:54.414 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:54.414 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:54.415 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.415 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.415 20:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.674 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:22:55.241 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.241 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:55.241 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.241 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.241 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.241 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:55.241 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:55.241 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:55.500 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:22:55.500 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:55.500 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:55.500 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:55.500 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:55.500 20:43:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.500 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:55.500 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.500 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.500 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.500 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:55.500 20:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:55.759 00:22:55.759 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:55.759 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:55.759 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.017 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.017 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.017 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.017 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.017 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.017 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:56.017 { 00:22:56.017 "cntlid": 103, 00:22:56.017 "qid": 0, 00:22:56.017 "state": "enabled", 00:22:56.017 "thread": "nvmf_tgt_poll_group_000", 00:22:56.017 "listen_address": { 00:22:56.017 "trtype": "RDMA", 00:22:56.017 "adrfam": "IPv4", 00:22:56.017 "traddr": "192.168.100.8", 00:22:56.017 "trsvcid": "4420" 00:22:56.017 }, 00:22:56.017 "peer_address": { 00:22:56.017 "trtype": "RDMA", 00:22:56.018 "adrfam": "IPv4", 00:22:56.018 "traddr": "192.168.100.8", 00:22:56.018 "trsvcid": "37525" 00:22:56.018 }, 00:22:56.018 "auth": { 00:22:56.018 "state": "completed", 00:22:56.018 "digest": "sha512", 00:22:56.018 "dhgroup": "null" 00:22:56.018 } 00:22:56.018 } 00:22:56.018 ]' 00:22:56.018 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:56.018 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.018 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:56.018 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:56.018 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:56.018 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.018 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.018 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.277 20:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:22:56.845 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.104 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.364 00:22:57.364 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:57.364 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.364 20:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:57.623 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.623 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.623 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.623 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.623 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.623 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:57.623 { 00:22:57.623 "cntlid": 105, 00:22:57.623 "qid": 0, 00:22:57.623 "state": "enabled", 00:22:57.623 "thread": "nvmf_tgt_poll_group_000", 00:22:57.623 "listen_address": { 00:22:57.623 "trtype": "RDMA", 00:22:57.623 "adrfam": "IPv4", 00:22:57.623 "traddr": "192.168.100.8", 00:22:57.623 "trsvcid": "4420" 00:22:57.623 }, 00:22:57.623 "peer_address": { 00:22:57.623 "trtype": "RDMA", 00:22:57.623 "adrfam": "IPv4", 00:22:57.623 "traddr": "192.168.100.8", 00:22:57.623 "trsvcid": "43562" 00:22:57.623 }, 00:22:57.623 "auth": { 00:22:57.623 "state": "completed", 00:22:57.623 "digest": "sha512", 00:22:57.623 "dhgroup": "ffdhe2048" 00:22:57.623 } 00:22:57.623 } 00:22:57.623 ]' 00:22:57.623 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:57.623 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:57.623 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:57.623 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:57.623 
20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:57.623 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.623 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.623 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.882 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:22:58.450 20:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.709 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:58.709 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.709 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.709 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.709 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:58.709 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:58.709 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:58.709 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:22:58.709 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:58.709 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:58.709 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:58.710 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:58.710 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.710 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.710 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.710 20:43:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.710 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.710 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.710 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.969 00:22:58.969 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:58.969 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.969 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:59.228 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.228 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.228 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.228 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.228 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.228 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:59.228 { 00:22:59.228 "cntlid": 107, 00:22:59.228 "qid": 0, 00:22:59.228 "state": "enabled", 00:22:59.228 "thread": "nvmf_tgt_poll_group_000", 00:22:59.228 "listen_address": { 00:22:59.228 "trtype": "RDMA", 00:22:59.228 "adrfam": "IPv4", 00:22:59.228 "traddr": "192.168.100.8", 00:22:59.228 "trsvcid": "4420" 00:22:59.228 }, 00:22:59.228 "peer_address": { 00:22:59.228 "trtype": "RDMA", 00:22:59.228 "adrfam": "IPv4", 00:22:59.228 "traddr": "192.168.100.8", 00:22:59.228 "trsvcid": "59656" 00:22:59.228 }, 00:22:59.228 "auth": { 00:22:59.228 "state": "completed", 00:22:59.228 "digest": "sha512", 00:22:59.228 "dhgroup": "ffdhe2048" 00:22:59.228 } 00:22:59.228 } 00:22:59.228 ]' 00:22:59.228 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:59.228 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.228 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:59.228 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:59.228 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:59.487 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.487 20:43:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.487 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.488 20:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:23:00.056 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.316 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.575 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.575 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # 
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.575 20:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.575 00:23:00.834 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:00.834 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:00.834 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.834 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.834 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.834 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.834 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.834 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.834 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:00.834 { 00:23:00.834 "cntlid": 109, 00:23:00.834 "qid": 0, 00:23:00.834 "state": "enabled", 00:23:00.834 "thread": "nvmf_tgt_poll_group_000", 00:23:00.834 "listen_address": { 00:23:00.834 "trtype": "RDMA", 00:23:00.834 "adrfam": "IPv4", 00:23:00.834 "traddr": "192.168.100.8", 00:23:00.834 "trsvcid": "4420" 00:23:00.834 }, 00:23:00.834 "peer_address": { 00:23:00.834 "trtype": "RDMA", 00:23:00.834 "adrfam": "IPv4", 00:23:00.834 "traddr": "192.168.100.8", 00:23:00.834 "trsvcid": "34020" 00:23:00.834 }, 00:23:00.834 "auth": { 00:23:00.834 "state": "completed", 00:23:00.834 "digest": "sha512", 00:23:00.834 "dhgroup": "ffdhe2048" 00:23:00.834 } 00:23:00.834 } 00:23:00.834 ]' 00:23:00.834 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:00.834 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:00.834 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:01.093 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:01.093 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:01.093 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.093 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.093 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:23:01.093 20:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:23:01.662 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.922 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:01.922 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.922 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.922 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.922 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:01.922 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:01.922 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:02.182 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:02.182 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.442 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.442 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.442 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.442 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.442 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.442 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:02.442 { 00:23:02.442 "cntlid": 111, 00:23:02.442 "qid": 0, 00:23:02.442 "state": "enabled", 00:23:02.442 "thread": "nvmf_tgt_poll_group_000", 00:23:02.442 "listen_address": { 00:23:02.442 "trtype": "RDMA", 00:23:02.442 "adrfam": "IPv4", 00:23:02.442 "traddr": "192.168.100.8", 00:23:02.442 "trsvcid": "4420" 00:23:02.442 }, 00:23:02.442 "peer_address": { 00:23:02.442 "trtype": "RDMA", 00:23:02.442 "adrfam": "IPv4", 00:23:02.442 "traddr": "192.168.100.8", 00:23:02.442 "trsvcid": "40087" 00:23:02.442 }, 00:23:02.442 "auth": { 00:23:02.442 "state": "completed", 00:23:02.442 "digest": "sha512", 00:23:02.442 "dhgroup": "ffdhe2048" 00:23:02.442 } 00:23:02.442 } 00:23:02.442 ]' 00:23:02.442 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:02.442 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.442 20:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:02.701 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:02.701 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:02.701 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.701 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.701 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.702 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:23:03.269 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.529 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:03.529 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.529 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.529 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.529 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:03.529 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:03.529 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:03.529 20:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:03.788 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:23:03.788 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:03.788 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:03.788 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:03.788 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:03.788 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.789 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.789 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.789 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.789 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.789 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.789 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.789 00:23:03.789 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:03.789 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:03.789 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.048 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.048 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.048 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.048 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.048 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.048 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:04.048 { 00:23:04.048 "cntlid": 113, 00:23:04.048 "qid": 0, 00:23:04.048 "state": "enabled", 00:23:04.048 "thread": "nvmf_tgt_poll_group_000", 00:23:04.048 "listen_address": { 00:23:04.048 "trtype": "RDMA", 00:23:04.048 "adrfam": "IPv4", 00:23:04.048 "traddr": "192.168.100.8", 00:23:04.048 "trsvcid": "4420" 00:23:04.048 }, 00:23:04.048 "peer_address": { 00:23:04.048 "trtype": "RDMA", 00:23:04.048 "adrfam": "IPv4", 00:23:04.048 "traddr": "192.168.100.8", 00:23:04.048 "trsvcid": "46282" 00:23:04.048 }, 00:23:04.048 "auth": { 00:23:04.048 "state": "completed", 00:23:04.048 "digest": "sha512", 00:23:04.048 "dhgroup": "ffdhe3072" 00:23:04.048 } 00:23:04.048 } 00:23:04.048 ]' 00:23:04.048 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:04.048 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.048 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:04.308 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:04.308 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:04.308 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.308 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.308 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.308 20:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:23:04.877 20:43:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.137 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:05.137 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.137 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.137 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.137 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:05.137 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:05.137 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:05.395 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:23:05.396 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:05.396 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:05.396 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:05.396 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:05.396 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.396 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.396 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.396 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.396 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.396 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.396 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.655 00:23:05.655 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:05.655 20:43:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:05.655 20:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.655 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.655 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.655 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.655 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.655 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.655 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:05.655 { 00:23:05.655 "cntlid": 115, 00:23:05.655 "qid": 0, 00:23:05.655 "state": "enabled", 00:23:05.655 "thread": "nvmf_tgt_poll_group_000", 00:23:05.655 "listen_address": { 00:23:05.655 "trtype": "RDMA", 00:23:05.655 "adrfam": "IPv4", 00:23:05.655 "traddr": "192.168.100.8", 00:23:05.655 "trsvcid": "4420" 00:23:05.655 }, 00:23:05.655 "peer_address": { 00:23:05.655 "trtype": "RDMA", 00:23:05.655 "adrfam": "IPv4", 00:23:05.655 "traddr": "192.168.100.8", 00:23:05.655 "trsvcid": "37737" 00:23:05.655 }, 00:23:05.655 "auth": { 00:23:05.655 "state": "completed", 00:23:05.655 "digest": "sha512", 00:23:05.655 "dhgroup": "ffdhe3072" 00:23:05.655 } 00:23:05.655 } 00:23:05.655 ]' 00:23:05.655 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:05.655 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:05.952 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:05.952 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:05.952 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:05.952 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.952 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.952 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.952 20:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:23:06.552 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.811 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:06.811 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.811 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.811 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.811 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:06.811 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:06.811 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:06.811 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:23:06.811 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:06.811 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:06.811 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:06.811 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:06.811 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.071 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.071 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.071 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.071 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.071 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.071 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.071 00:23:07.330 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:07.330 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:07.330 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.330 
20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.330 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.330 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.330 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.330 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.330 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:07.330 { 00:23:07.330 "cntlid": 117, 00:23:07.330 "qid": 0, 00:23:07.330 "state": "enabled", 00:23:07.330 "thread": "nvmf_tgt_poll_group_000", 00:23:07.330 "listen_address": { 00:23:07.330 "trtype": "RDMA", 00:23:07.330 "adrfam": "IPv4", 00:23:07.330 "traddr": "192.168.100.8", 00:23:07.330 "trsvcid": "4420" 00:23:07.330 }, 00:23:07.330 "peer_address": { 00:23:07.330 "trtype": "RDMA", 00:23:07.330 "adrfam": "IPv4", 00:23:07.330 "traddr": "192.168.100.8", 00:23:07.330 "trsvcid": "44798" 00:23:07.330 }, 00:23:07.330 "auth": { 00:23:07.330 "state": "completed", 00:23:07.330 "digest": "sha512", 00:23:07.330 "dhgroup": "ffdhe3072" 00:23:07.330 } 00:23:07.330 } 00:23:07.330 ]' 00:23:07.330 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:07.330 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:07.330 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:07.589 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:07.589 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:07.589 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.589 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.589 20:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.589 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:23:08.157 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.416 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:08.416 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.416 20:43:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.416 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.416 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:08.416 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:08.416 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:08.676 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:23:08.676 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:08.676 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:08.676 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:08.676 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:08.676 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.676 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:08.676 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.676 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.676 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.676 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:08.676 20:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:08.935 00:23:08.935 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:08.935 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:08.935 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.935 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.935 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.935 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.935 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.935 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.935 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:08.935 { 00:23:08.935 "cntlid": 119, 00:23:08.935 "qid": 0, 00:23:08.935 "state": "enabled", 00:23:08.935 "thread": "nvmf_tgt_poll_group_000", 00:23:08.935 "listen_address": { 00:23:08.935 "trtype": "RDMA", 00:23:08.935 "adrfam": "IPv4", 00:23:08.935 "traddr": "192.168.100.8", 00:23:08.935 "trsvcid": "4420" 00:23:08.935 }, 00:23:08.935 "peer_address": { 00:23:08.935 "trtype": "RDMA", 00:23:08.935 "adrfam": "IPv4", 00:23:08.935 "traddr": "192.168.100.8", 00:23:08.935 "trsvcid": "46285" 00:23:08.935 }, 00:23:08.935 "auth": { 00:23:08.935 "state": "completed", 00:23:08.935 "digest": "sha512", 00:23:08.935 "dhgroup": "ffdhe3072" 00:23:08.935 } 00:23:08.935 } 00:23:08.935 ]' 00:23:08.935 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:08.935 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:08.935 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:09.195 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:09.195 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:09.195 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.195 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.195 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.195 20:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- 
# for keyid in "${!keys[@]}" 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.132 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.391 00:23:10.391 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:10.391 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:10.391 20:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.651 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.651 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.651 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.651 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.651 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.651 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:10.651 { 00:23:10.651 "cntlid": 121, 00:23:10.651 "qid": 0, 00:23:10.651 "state": "enabled", 00:23:10.651 "thread": "nvmf_tgt_poll_group_000", 00:23:10.651 "listen_address": { 00:23:10.651 "trtype": "RDMA", 00:23:10.651 "adrfam": "IPv4", 00:23:10.651 "traddr": "192.168.100.8", 00:23:10.651 "trsvcid": "4420" 00:23:10.651 }, 00:23:10.651 "peer_address": { 00:23:10.651 "trtype": "RDMA", 00:23:10.651 "adrfam": "IPv4", 00:23:10.651 "traddr": "192.168.100.8", 00:23:10.651 "trsvcid": "55369" 00:23:10.651 }, 00:23:10.651 "auth": { 00:23:10.651 "state": "completed", 00:23:10.651 "digest": "sha512", 00:23:10.651 "dhgroup": "ffdhe4096" 00:23:10.651 } 00:23:10.651 } 00:23:10.651 ]' 00:23:10.651 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:10.651 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.651 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:10.651 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:10.651 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:10.910 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.910 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.910 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.910 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:23:11.478 20:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.737 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:11.737 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.737 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.737 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.737 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:11.737 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:11.737 20:44:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:11.996 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:23:11.996 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:11.996 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:11.996 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:11.996 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:11.996 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.996 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.996 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.996 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.996 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.997 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.997 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.256 00:23:12.256 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:12.256 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:12.256 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.256 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.256 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.256 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.256 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.256 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.256 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:12.256 { 00:23:12.256 "cntlid": 123, 00:23:12.256 "qid": 0, 00:23:12.256 "state": 
"enabled", 00:23:12.256 "thread": "nvmf_tgt_poll_group_000", 00:23:12.256 "listen_address": { 00:23:12.256 "trtype": "RDMA", 00:23:12.256 "adrfam": "IPv4", 00:23:12.256 "traddr": "192.168.100.8", 00:23:12.256 "trsvcid": "4420" 00:23:12.256 }, 00:23:12.256 "peer_address": { 00:23:12.256 "trtype": "RDMA", 00:23:12.256 "adrfam": "IPv4", 00:23:12.256 "traddr": "192.168.100.8", 00:23:12.256 "trsvcid": "40529" 00:23:12.256 }, 00:23:12.256 "auth": { 00:23:12.256 "state": "completed", 00:23:12.256 "digest": "sha512", 00:23:12.256 "dhgroup": "ffdhe4096" 00:23:12.256 } 00:23:12.256 } 00:23:12.256 ]' 00:23:12.256 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:12.516 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:12.516 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:12.516 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:12.516 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:12.516 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.516 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.516 20:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.775 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:23:13.343 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.343 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:13.343 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.343 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.343 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.343 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:13.343 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:13.343 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:13.602 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:23:13.602 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:13.602 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:13.602 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:13.602 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:13.602 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.603 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.603 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.603 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.603 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.603 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.603 20:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.862 00:23:13.862 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:13.862 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:13.862 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.121 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.122 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.122 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.122 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.122 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.122 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:14.122 { 00:23:14.122 "cntlid": 125, 00:23:14.122 "qid": 0, 00:23:14.122 "state": "enabled", 00:23:14.122 "thread": "nvmf_tgt_poll_group_000", 00:23:14.122 "listen_address": { 00:23:14.122 "trtype": "RDMA", 00:23:14.122 "adrfam": "IPv4", 00:23:14.122 "traddr": "192.168.100.8", 00:23:14.122 "trsvcid": "4420" 00:23:14.122 }, 00:23:14.122 "peer_address": { 00:23:14.122 "trtype": 
"RDMA", 00:23:14.122 "adrfam": "IPv4", 00:23:14.122 "traddr": "192.168.100.8", 00:23:14.122 "trsvcid": "56792" 00:23:14.122 }, 00:23:14.122 "auth": { 00:23:14.122 "state": "completed", 00:23:14.122 "digest": "sha512", 00:23:14.122 "dhgroup": "ffdhe4096" 00:23:14.122 } 00:23:14.122 } 00:23:14.122 ]' 00:23:14.122 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:14.122 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:14.122 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:14.122 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:14.122 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:14.122 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.122 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.122 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.381 20:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:23:14.949 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.949 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:14.949 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.949 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.949 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.949 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:14.949 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:14.949 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:15.207 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:23:15.207 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:15.207 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:23:15.207 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:15.207 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:15.207 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.207 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:15.207 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.207 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.207 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.207 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:15.207 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:15.466 00:23:15.466 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:15.466 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:15.466 20:44:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.724 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.724 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.725 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.725 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.725 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.725 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:15.725 { 00:23:15.725 "cntlid": 127, 00:23:15.725 "qid": 0, 00:23:15.725 "state": "enabled", 00:23:15.725 "thread": "nvmf_tgt_poll_group_000", 00:23:15.725 "listen_address": { 00:23:15.725 "trtype": "RDMA", 00:23:15.725 "adrfam": "IPv4", 00:23:15.725 "traddr": "192.168.100.8", 00:23:15.725 "trsvcid": "4420" 00:23:15.725 }, 00:23:15.725 "peer_address": { 00:23:15.725 "trtype": "RDMA", 00:23:15.725 "adrfam": "IPv4", 00:23:15.725 "traddr": "192.168.100.8", 00:23:15.725 "trsvcid": "40240" 00:23:15.725 }, 00:23:15.725 "auth": { 00:23:15.725 "state": "completed", 00:23:15.725 "digest": "sha512", 00:23:15.725 "dhgroup": "ffdhe4096" 00:23:15.725 } 00:23:15.725 } 00:23:15.725 ]' 00:23:15.725 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:15.725 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:15.725 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:15.725 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:15.725 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:15.725 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.725 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.725 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.984 20:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:23:16.552 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.810 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.379 00:23:17.379 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:17.379 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.379 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:17.379 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.379 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.379 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.379 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.379 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.379 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:17.379 { 00:23:17.379 "cntlid": 129, 00:23:17.379 "qid": 0, 00:23:17.379 "state": "enabled", 00:23:17.379 "thread": "nvmf_tgt_poll_group_000", 00:23:17.379 "listen_address": { 00:23:17.379 "trtype": "RDMA", 00:23:17.379 "adrfam": "IPv4", 00:23:17.379 "traddr": "192.168.100.8", 00:23:17.379 "trsvcid": "4420" 00:23:17.379 }, 00:23:17.379 "peer_address": { 00:23:17.379 "trtype": "RDMA", 00:23:17.379 "adrfam": "IPv4", 00:23:17.379 "traddr": "192.168.100.8", 00:23:17.379 "trsvcid": "55024" 00:23:17.379 }, 00:23:17.379 "auth": { 00:23:17.379 "state": "completed", 00:23:17.379 "digest": "sha512", 00:23:17.379 "dhgroup": "ffdhe6144" 00:23:17.379 } 00:23:17.379 } 00:23:17.379 ]' 00:23:17.379 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:17.379 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.379 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:17.639 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:17.639 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:17.639 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.639 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.639 20:44:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.639 20:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:23:18.576 20:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.576 20:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:18.576 20:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.576 20:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.576 20:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.576 20:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:18.576 20:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:18.576 20:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:18.576 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:23:18.576 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:18.576 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:18.576 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:18.576 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:18.576 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.576 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.576 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.576 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.576 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.576 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.576 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.145 00:23:19.145 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:19.145 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:19.145 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.145 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.145 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.145 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.145 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.146 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.146 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:19.146 { 00:23:19.146 "cntlid": 131, 00:23:19.146 "qid": 0, 00:23:19.146 "state": "enabled", 00:23:19.146 "thread": "nvmf_tgt_poll_group_000", 00:23:19.146 "listen_address": { 00:23:19.146 "trtype": "RDMA", 00:23:19.146 "adrfam": "IPv4", 00:23:19.146 "traddr": "192.168.100.8", 00:23:19.146 "trsvcid": "4420" 00:23:19.146 }, 00:23:19.146 "peer_address": { 00:23:19.146 "trtype": "RDMA", 00:23:19.146 "adrfam": "IPv4", 00:23:19.146 "traddr": "192.168.100.8", 00:23:19.146 "trsvcid": "57025" 00:23:19.146 }, 00:23:19.146 "auth": { 00:23:19.146 "state": "completed", 00:23:19.146 "digest": "sha512", 00:23:19.146 "dhgroup": "ffdhe6144" 00:23:19.146 } 00:23:19.146 } 00:23:19.146 ]' 00:23:19.146 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:19.146 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:19.146 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:19.146 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:19.146 
20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:19.405 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.405 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.405 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.405 20:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:23:19.973 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.233 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:20.233 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.233 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.233 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.233 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:20.233 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:20.233 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:20.493 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:23:20.493 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:20.493 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:20.493 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:20.493 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:20.493 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.493 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.493 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.493 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:20.493 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.493 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.493 20:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.763 00:23:20.763 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:20.763 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:20.763 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.763 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.068 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:21.068 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.068 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.068 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.068 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:21.068 { 00:23:21.068 "cntlid": 133, 00:23:21.068 "qid": 0, 00:23:21.068 "state": "enabled", 00:23:21.068 "thread": "nvmf_tgt_poll_group_000", 00:23:21.068 "listen_address": { 00:23:21.068 "trtype": "RDMA", 00:23:21.068 "adrfam": "IPv4", 00:23:21.068 "traddr": "192.168.100.8", 00:23:21.068 "trsvcid": "4420" 00:23:21.068 }, 00:23:21.068 "peer_address": { 00:23:21.068 "trtype": "RDMA", 00:23:21.068 "adrfam": "IPv4", 00:23:21.068 "traddr": "192.168.100.8", 00:23:21.068 "trsvcid": "35260" 00:23:21.068 }, 00:23:21.068 "auth": { 00:23:21.068 "state": "completed", 00:23:21.068 "digest": "sha512", 00:23:21.068 "dhgroup": "ffdhe6144" 00:23:21.068 } 00:23:21.068 } 00:23:21.068 ]' 00:23:21.068 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:21.068 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:21.068 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:21.068 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:21.068 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:21.068 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.068 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.068 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.327 20:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:23:21.896 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.896 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:21.896 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.896 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.896 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.896 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:21.896 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:21.896 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:22.155 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:23:22.155 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:22.155 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:22.155 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:22.155 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:22.155 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.155 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:22.155 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.155 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.155 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.155 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:22.155 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:22.414 00:23:22.414 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:22.414 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:22.414 20:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:22.674 { 00:23:22.674 "cntlid": 135, 00:23:22.674 "qid": 0, 00:23:22.674 "state": "enabled", 00:23:22.674 "thread": "nvmf_tgt_poll_group_000", 00:23:22.674 "listen_address": { 00:23:22.674 "trtype": "RDMA", 00:23:22.674 "adrfam": "IPv4", 00:23:22.674 "traddr": "192.168.100.8", 00:23:22.674 "trsvcid": "4420" 00:23:22.674 }, 00:23:22.674 "peer_address": { 00:23:22.674 "trtype": "RDMA", 00:23:22.674 "adrfam": "IPv4", 00:23:22.674 "traddr": "192.168.100.8", 00:23:22.674 "trsvcid": "35869" 00:23:22.674 }, 00:23:22.674 "auth": { 00:23:22.674 "state": "completed", 00:23:22.674 "digest": "sha512", 00:23:22.674 "dhgroup": "ffdhe6144" 00:23:22.674 } 00:23:22.674 } 00:23:22.674 ]' 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.674 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.933 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:23:23.502 20:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.502 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.761 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.327 00:23:24.327 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:24.327 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.327 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:24.586 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.586 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.586 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.586 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.586 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.586 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:24.586 { 00:23:24.586 "cntlid": 137, 00:23:24.586 "qid": 0, 00:23:24.586 "state": "enabled", 00:23:24.586 "thread": "nvmf_tgt_poll_group_000", 00:23:24.586 "listen_address": { 00:23:24.586 "trtype": "RDMA", 00:23:24.586 "adrfam": "IPv4", 00:23:24.586 "traddr": "192.168.100.8", 00:23:24.586 "trsvcid": "4420" 00:23:24.586 }, 00:23:24.586 "peer_address": { 00:23:24.586 "trtype": "RDMA", 00:23:24.586 "adrfam": "IPv4", 00:23:24.586 "traddr": "192.168.100.8", 00:23:24.586 "trsvcid": "38423" 00:23:24.586 }, 00:23:24.586 "auth": { 00:23:24.586 "state": "completed", 00:23:24.586 "digest": "sha512", 00:23:24.586 "dhgroup": "ffdhe8192" 00:23:24.586 } 00:23:24.586 } 00:23:24.586 ]' 00:23:24.586 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:24.586 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:24.586 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:24.586 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:24.586 20:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:24.586 20:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.586 20:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.586 20:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.845 20:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:23:25.413 20:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.413 20:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:25.413 20:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.413 20:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.413 20:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.413 20:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:25.413 20:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:25.413 20:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:25.673 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:23:25.673 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:25.673 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:25.673 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:25.673 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:25.673 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.673 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.673 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.673 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.673 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.673 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.673 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.241 00:23:26.241 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:26.241 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:26.241 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.241 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.241 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.241 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.241 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.241 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.241 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:26.241 { 00:23:26.241 "cntlid": 139, 00:23:26.241 "qid": 0, 00:23:26.241 "state": "enabled", 00:23:26.241 "thread": "nvmf_tgt_poll_group_000", 00:23:26.241 "listen_address": { 00:23:26.241 "trtype": "RDMA", 00:23:26.241 "adrfam": "IPv4", 00:23:26.241 "traddr": "192.168.100.8", 00:23:26.241 "trsvcid": "4420" 00:23:26.241 }, 00:23:26.241 "peer_address": { 00:23:26.241 "trtype": "RDMA", 00:23:26.241 "adrfam": "IPv4", 00:23:26.241 "traddr": "192.168.100.8", 00:23:26.241 "trsvcid": "39123" 00:23:26.241 }, 00:23:26.241 "auth": { 00:23:26.241 "state": "completed", 00:23:26.241 "digest": "sha512", 00:23:26.241 "dhgroup": "ffdhe8192" 00:23:26.241 } 00:23:26.241 } 00:23:26.241 ]' 00:23:26.241 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:26.500 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:26.500 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:26.500 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:26.500 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:26.500 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.500 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.500 20:44:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.759 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:MjcyZDU3NzY2NWQxMjE4NjllMzI2N2VmN2YxYzU2ZTi82iDP: --dhchap-ctrl-secret DHHC-1:02:YjExMTQ4NmNmMjEyYTAyMGIzNWMyNzhlOTZlMzExODM2OWQ4Mzc3YzVjYzNmYWM22AZEZw==: 00:23:27.329 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.329 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:27.329 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.329 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.329 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.329 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:27.329 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:27.329 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:27.588 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:23:27.588 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:27.588 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:27.588 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:27.588 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:27.588 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.588 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.588 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.588 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.588 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.588 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.588 20:44:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.154 00:23:28.154 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:28.154 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:23:28.154 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.154 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.154 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.154 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.154 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.154 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.154 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:28.154 { 00:23:28.154 "cntlid": 141, 00:23:28.154 "qid": 0, 00:23:28.154 "state": "enabled", 00:23:28.154 "thread": "nvmf_tgt_poll_group_000", 00:23:28.154 "listen_address": { 00:23:28.154 "trtype": "RDMA", 00:23:28.154 "adrfam": "IPv4", 00:23:28.154 "traddr": "192.168.100.8", 00:23:28.154 "trsvcid": "4420" 00:23:28.154 }, 00:23:28.154 "peer_address": { 00:23:28.154 "trtype": "RDMA", 00:23:28.154 "adrfam": "IPv4", 00:23:28.154 "traddr": "192.168.100.8", 00:23:28.154 "trsvcid": "33678" 00:23:28.154 }, 00:23:28.154 "auth": { 00:23:28.154 "state": "completed", 00:23:28.154 "digest": "sha512", 00:23:28.154 "dhgroup": "ffdhe8192" 00:23:28.154 } 00:23:28.154 } 00:23:28.154 ]' 00:23:28.154 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:28.413 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:28.413 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:28.413 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:28.413 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:28.413 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.413 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.413 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.672 20:44:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQ0NGNkN2EyMmYxZDljZTdmZTU2NDlkZjEzOTUxZmRlODE2MTJlNTljMjQzMjRib7d3hA==: --dhchap-ctrl-secret DHHC-1:01:N2IxYTdlNWIwZjgwZGFhYjVmNzE5NzNkNTAxZGI2YTYXEhTa: 00:23:29.240 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.240 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:29.240 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.240 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.240 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.240 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:29.240 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:29.240 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:29.499 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:23:29.499 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:29.499 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:29.499 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:29.499 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:29.499 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.499 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:29.499 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.499 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.499 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.499 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:29.499 20:44:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:30.067 00:23:30.067 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:30.067 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:30.067 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.067 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.067 20:44:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:30.067 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.067 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.067 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.067 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:30.067 { 00:23:30.067 "cntlid": 143, 00:23:30.067 "qid": 0, 00:23:30.067 "state": "enabled", 00:23:30.067 "thread": "nvmf_tgt_poll_group_000", 00:23:30.067 "listen_address": { 00:23:30.067 "trtype": "RDMA", 00:23:30.067 "adrfam": "IPv4", 00:23:30.067 "traddr": "192.168.100.8", 00:23:30.067 "trsvcid": "4420" 00:23:30.067 }, 00:23:30.067 "peer_address": { 00:23:30.067 "trtype": "RDMA", 00:23:30.067 "adrfam": "IPv4", 00:23:30.067 "traddr": "192.168.100.8", 00:23:30.067 "trsvcid": "54850" 00:23:30.067 }, 00:23:30.067 "auth": { 00:23:30.067 "state": "completed", 00:23:30.067 "digest": "sha512", 00:23:30.067 "dhgroup": "ffdhe8192" 00:23:30.067 } 00:23:30.067 } 00:23:30.067 ]' 00:23:30.067 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:30.067 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:30.067 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:30.067 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:30.067 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:30.327 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.327 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.327 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.327 20:44:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:23:30.894 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:31.153 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:31.153 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.153 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.153 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.153 
20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:31.153 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:23:31.153 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:31.153 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:31.153 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:31.153 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:31.412 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:23:31.412 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:31.412 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:31.412 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:31.412 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:31.412 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.412 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.412 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.412 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.412 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.412 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.412 20:44:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.670 00:23:31.929 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:31.929 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:31.929 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:31.929 20:44:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.929 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:31.929 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.929 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.929 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.929 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:31.929 { 00:23:31.929 "cntlid": 145, 00:23:31.929 "qid": 0, 00:23:31.929 "state": "enabled", 00:23:31.929 "thread": "nvmf_tgt_poll_group_000", 00:23:31.929 "listen_address": { 00:23:31.929 "trtype": "RDMA", 00:23:31.929 "adrfam": "IPv4", 00:23:31.929 "traddr": "192.168.100.8", 00:23:31.929 "trsvcid": "4420" 00:23:31.929 }, 00:23:31.929 "peer_address": { 00:23:31.929 "trtype": "RDMA", 00:23:31.929 "adrfam": "IPv4", 00:23:31.929 "traddr": "192.168.100.8", 00:23:31.929 "trsvcid": "53492" 00:23:31.929 }, 00:23:31.929 "auth": { 00:23:31.929 "state": "completed", 00:23:31.929 "digest": "sha512", 00:23:31.929 "dhgroup": "ffdhe8192" 00:23:31.929 } 00:23:31.929 } 00:23:31.929 ]' 00:23:31.929 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:31.929 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:31.929 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:32.187 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:32.188 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:32.188 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.188 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.188 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.188 20:44:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NGZiYmI1NTAzMzE3YzRiMjcyMTc3ZjM5ZTIwZDcxYjllODBjMWM3NDIyNzEwNzFmY2fuwA==: --dhchap-ctrl-secret DHHC-1:03:YzEwODZjMTBmNzJjYmRiMWI0NGU2Yjc0ZmM5NTg2NWNkNzYzZmFhMzg5ZGM3N2Y1NTNmYTA1NTgwZjk1NjMwMVQe/HE=: 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.125 
20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:33.125 20:44:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:05.258 request: 00:24:05.258 { 00:24:05.258 "name": "nvme0", 00:24:05.258 "trtype": "rdma", 00:24:05.258 "traddr": "192.168.100.8", 00:24:05.258 "adrfam": "ipv4", 00:24:05.258 "trsvcid": "4420", 00:24:05.258 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:05.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:05.258 "prchk_reftag": false, 00:24:05.258 "prchk_guard": false, 00:24:05.258 "hdgst": false, 00:24:05.258 "ddgst": false, 00:24:05.258 "dhchap_key": "key2", 00:24:05.258 "method": "bdev_nvme_attach_controller", 00:24:05.258 "req_id": 1 00:24:05.258 } 00:24:05.258 Got JSON-RPC error response 00:24:05.258 response: 00:24:05.258 { 00:24:05.258 "code": -5, 00:24:05.258 "message": "Input/output error" 00:24:05.258 } 00:24:05.258 20:44:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:05.258 request: 00:24:05.258 { 00:24:05.258 "name": "nvme0", 00:24:05.258 "trtype": "rdma", 00:24:05.258 "traddr": "192.168.100.8", 00:24:05.258 "adrfam": "ipv4", 00:24:05.258 "trsvcid": "4420", 00:24:05.258 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:05.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:05.258 "prchk_reftag": false, 00:24:05.258 "prchk_guard": false, 00:24:05.258 "hdgst": false, 00:24:05.258 "ddgst": false, 00:24:05.258 "dhchap_key": "key1", 00:24:05.258 "dhchap_ctrlr_key": "ckey2", 00:24:05.258 "method": "bdev_nvme_attach_controller", 00:24:05.258 "req_id": 1 00:24:05.258 } 00:24:05.258 Got JSON-RPC error response 00:24:05.258 response: 00:24:05.258 { 00:24:05.258 "code": -5, 00:24:05.258 "message": "Input/output error" 00:24:05.258 } 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:24:05.258 20:44:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.258 20:44:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:37.343 request: 00:24:37.343 { 00:24:37.343 "name": "nvme0", 00:24:37.343 "trtype": "rdma", 00:24:37.343 "traddr": "192.168.100.8", 00:24:37.343 "adrfam": "ipv4", 00:24:37.343 "trsvcid": "4420", 00:24:37.343 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:37.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:24:37.343 "prchk_reftag": false, 00:24:37.343 "prchk_guard": false, 00:24:37.343 "hdgst": false, 00:24:37.343 "ddgst": false, 00:24:37.343 "dhchap_key": "key1", 00:24:37.343 "dhchap_ctrlr_key": "ckey1", 00:24:37.343 "method": "bdev_nvme_attach_controller", 00:24:37.343 "req_id": 1 00:24:37.343 } 00:24:37.343 Got JSON-RPC error response 00:24:37.343 response: 00:24:37.343 { 00:24:37.343 "code": -5, 00:24:37.343 "message": "Input/output error" 00:24:37.343 } 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1149121 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1149121 ']' 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1149121 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
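[Annotation] The killprocess helper being traced in the surrounding lines follows a probe-then-kill pattern. A minimal sketch, reconstructed only from the calls visible in this trace (kill -0 probe, uname check, ps name lookup, kill, wait); the real helper in autotest_common.sh has more branches, such as the sudo check traced just below:

killprocess() {
    local pid=$1
    # Probe whether the pid is still alive before doing anything.
    kill -0 "$pid" || return 0
    if [ "$(uname)" = Linux ]; then
        # Recover the process name, as in the trace (reactor_0 here).
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}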
00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1149121 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1149121' 00:24:37.343 killing process with pid 1149121 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1149121 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1149121 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1182342 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1182342 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1182342 ']' 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.343 20:45:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1182342 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1182342 ']' 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
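[Annotation] The restart sequence traced above (nvmfappstart followed by waitforlisten) reduces to the pattern below. The binary path, flags, and socket path are taken verbatim from the trace; the polling loop is an illustrative stand-in, not the exact waitforlisten implementation:

# Launch the target in wait-for-rpc mode with nvmf_auth logging enabled.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Assumed stand-in for waitforlisten: poll until the app answers on its
# UNIX domain RPC socket (/var/tmp/spdk.sock), up to max_retries attempts.
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.5
done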
00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:24:37.343 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:37.344 20:45:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:37.344 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:37.344 { 00:24:37.344 "cntlid": 1, 00:24:37.344 "qid": 0, 00:24:37.344 "state": "enabled", 00:24:37.344 "thread": "nvmf_tgt_poll_group_000", 00:24:37.344 "listen_address": { 00:24:37.344 "trtype": "RDMA", 00:24:37.344 "adrfam": "IPv4", 00:24:37.344 "traddr": "192.168.100.8", 00:24:37.344 "trsvcid": "4420" 00:24:37.344 }, 00:24:37.344 "peer_address": { 00:24:37.344 "trtype": "RDMA", 00:24:37.344 "adrfam": "IPv4", 00:24:37.344 "traddr": "192.168.100.8", 00:24:37.344 "trsvcid": "34228" 00:24:37.344 }, 00:24:37.344 "auth": { 00:24:37.344 "state": "completed", 00:24:37.344 "digest": "sha512", 00:24:37.344 "dhgroup": "ffdhe8192" 00:24:37.344 } 00:24:37.344 } 00:24:37.344 ]' 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:37.344 20:45:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:OWVlNjMyNTNmNTE2Y2E2NjJiMWZjZTgyYTlmM2U1M2VlODQ2YWQ5ZWIwMTE1OTEzNDVkZWE2N2IzMTFjZTFjZlICc2U=: 00:24:37.911 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:37.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:37.911 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:37.911 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.911 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.911 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.911 20:45:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:24:37.911 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.911 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.911 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.911 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:24:37.912 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:24:38.170 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:38.170 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:38.170 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:38.170 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:24:38.170 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.170 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:24:38.170 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.170 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:38.170 20:45:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:10.254 request: 00:25:10.254 { 00:25:10.254 "name": "nvme0", 00:25:10.254 "trtype": "rdma", 00:25:10.254 "traddr": "192.168.100.8", 00:25:10.254 "adrfam": "ipv4", 00:25:10.254 "trsvcid": "4420", 00:25:10.254 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:10.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:25:10.254 "prchk_reftag": false, 00:25:10.254 "prchk_guard": false, 00:25:10.254 "hdgst": false, 00:25:10.254 "ddgst": false, 00:25:10.254 "dhchap_key": "key3", 00:25:10.254 "method": "bdev_nvme_attach_controller", 00:25:10.254 "req_id": 1 00:25:10.254 } 00:25:10.254 Got JSON-RPC error response 00:25:10.254 response: 
00:25:10.254 { 00:25:10.254 "code": -5, 00:25:10.254 "message": "Input/output error" 00:25:10.254 } 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:25:10.254 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.255 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:10.255 20:45:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:42.418 request: 00:25:42.418 { 00:25:42.418 "name": "nvme0", 00:25:42.418 "trtype": "rdma", 00:25:42.418 "traddr": "192.168.100.8", 00:25:42.418 "adrfam": "ipv4", 00:25:42.418 "trsvcid": "4420", 00:25:42.418 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:42.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:25:42.418 
"prchk_reftag": false, 00:25:42.418 "prchk_guard": false, 00:25:42.418 "hdgst": false, 00:25:42.418 "ddgst": false, 00:25:42.418 "dhchap_key": "key3", 00:25:42.418 "method": "bdev_nvme_attach_controller", 00:25:42.418 "req_id": 1 00:25:42.418 } 00:25:42.418 Got JSON-RPC error response 00:25:42.418 response: 00:25:42.418 { 00:25:42.418 "code": -5, 00:25:42.418 "message": "Input/output error" 00:25:42.418 } 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:25:42.418 20:46:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:42.418 request: 00:25:42.418 { 00:25:42.418 "name": "nvme0", 00:25:42.418 "trtype": "rdma", 00:25:42.418 "traddr": "192.168.100.8", 00:25:42.418 "adrfam": "ipv4", 00:25:42.418 "trsvcid": "4420", 00:25:42.418 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:42.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:25:42.418 "prchk_reftag": false, 00:25:42.418 "prchk_guard": false, 00:25:42.418 "hdgst": false, 00:25:42.418 "ddgst": false, 00:25:42.418 "dhchap_key": "key0", 00:25:42.418 "dhchap_ctrlr_key": "key1", 00:25:42.418 "method": "bdev_nvme_attach_controller", 00:25:42.418 "req_id": 1 00:25:42.418 } 00:25:42.418 Got JSON-RPC error response 00:25:42.418 response: 00:25:42.418 { 00:25:42.418 "code": -5, 00:25:42.418 "message": "Input/output error" 00:25:42.418 } 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:42.418 00:25:42.418 
20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:42.418 20:46:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1149169 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1149169 ']' 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1149169 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1149169 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1149169' 00:25:42.418 killing process with pid 1149169 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1149169 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1149169 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:42.418 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:42.419 rmmod nvme_rdma 00:25:42.419 rmmod nvme_fabrics 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1182342 ']' 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1182342 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1182342 ']' 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1182342 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1182342 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1182342' 00:25:42.419 killing process with pid 1182342 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1182342 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1182342 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Q12 /tmp/spdk.key-sha256.BCy /tmp/spdk.key-sha384.YYC /tmp/spdk.key-sha512.BjE /tmp/spdk.key-sha512.2Sw /tmp/spdk.key-sha384.ycM /tmp/spdk.key-sha256.ya7 '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:25:42.419 00:25:42.419 real 4m23.787s 00:25:42.419 user 9m22.996s 00:25:42.419 sys 0m24.459s 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:42.419 20:46:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:42.419 ************************************ 00:25:42.419 END TEST nvmf_auth_target 00:25:42.419 ************************************ 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:42.419 20:46:29 
nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:42.419 ************************************ 00:25:42.419 START TEST nvmf_fuzz 00:25:42.419 ************************************ 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:25:42.419 * Looking for test storage... 00:25:42.419 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.419 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:25:42.420 20:46:29 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.990 20:46:37 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:48.990 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:48.990 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:48.991 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:48.991 20:46:37 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:48.991 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:48.991 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # rdma_device_init 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # uname 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@65 -- # modprobe ib_uverbs 
00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@81 
-- # ip addr show mlx_0_0 00:25:48.991 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:48.991 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:48.991 altname enp217s0f0np0 00:25:48.991 altname ens818f0np0 00:25:48.991 inet 192.168.100.8/24 scope global mlx_0_0 00:25:48.991 valid_lft forever preferred_lft forever 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:48.991 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:48.991 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:48.991 altname enp217s0f1np1 00:25:48.991 altname ens818f1np1 00:25:48.991 inet 192.168.100.9/24 scope global mlx_0_1 00:25:48.991 valid_lft forever preferred_lft forever 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:48.991 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:48.992 192.168.100.9' 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:48.992 192.168.100.9' 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # head -n 1 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:48.992 192.168.100.9' 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@458 -- # tail -n +2 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@458 -- # head -n 1 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:48.992 
20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1197343 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1197343 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1197343 ']' 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:48.992 20:46:37 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.929 Malloc0 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 
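(Recap, not part of the trace: the nvmftestinit block above ends by deriving the RDMA target addresses. For each mlx interface reported by get_rdma_if_list, the harness reads the IPv4 address with the ip/awk/cut pipeline exactly as traced. A minimal standalone sketch of that derivation, assuming this run's interface names:

    # derive the IPv4 address of each RDMA-capable port, as nvmf/common.sh does above;
    # interface names are taken from this run and will differ on other rigs
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # prints 192.168.100.8 and 192.168.100.9 here; the harness keeps the first
    # line (head -n 1) as NVMF_FIRST_TARGET_IP and the second line
    # (tail -n +2 | head -n 1) as NVMF_SECOND_TARGET_IP

)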
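(Recap, not part of the trace: the target bring-up just logged happens entirely over the /var/tmp/spdk.sock RPC socket that nvmf_tgt listens on: an RDMA transport, a 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace. A hedged sketch of the same sequence issued directly through rpc.py, commands as in the rpc_cmd traces above:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create -b Malloc0 64 512    # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # the RDMA listener is added at the start of the next trace block:
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

)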
00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:25:49.929 20:46:38 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:26:22.015 Fuzzing completed. Shutting down the fuzz application 00:26:22.015 00:26:22.015 Dumping successful admin opcodes: 00:26:22.015 8, 9, 10, 24, 00:26:22.015 Dumping successful io opcodes: 00:26:22.015 0, 9, 00:26:22.015 NS: 0x200003af1f00 I/O qp, Total commands completed: 1006462, total successful commands: 5897, random_seed: 93582784 00:26:22.015 NS: 0x200003af1f00 admin qp, Total commands completed: 129856, total successful commands: 1057, random_seed: 2264840320 00:26:22.015 20:47:08 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:22.015 Fuzzing completed. 
Shutting down the fuzz application 00:26:22.015 00:26:22.015 Dumping successful admin opcodes: 00:26:22.015 24, 00:26:22.015 Dumping successful io opcodes: 00:26:22.015 00:26:22.015 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3206676468 00:26:22.015 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3206756914 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:22.015 rmmod nvme_rdma 00:26:22.015 rmmod nvme_fabrics 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1197343 ']' 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1197343 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1197343 ']' 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 1197343 00:26:22.015 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1197343 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1197343' 00:26:22.016 killing process with pid 1197343 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 1197343 
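(Recap, not part of the trace: both fuzz passes above drive the same subsystem with the nvme_fuzz app; the first generates commands from a fixed seed for a bounded time, the second drives the command patterns described in example.json, and the opcode dumps plus per-queue completion counts form the result record. The two invocations, verbatim from the traces; -m 0x2 is the reactor core mask, and -t/-S appear to be the run time in seconds and the random seed, while the roles of -N and -a are not spelled out in this log:

    fuzz=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
    trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
    # pass 1: seeded random commands for 30 seconds
    $fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    # pass 2: commands shaped by the bundled example.json
    $fuzz -m 0x2 -F "$trid" -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a

)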
00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 1197343 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:22.016 00:26:22.016 real 0m41.347s 00:26:22.016 user 0m50.894s 00:26:22.016 sys 0m22.433s 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:22.016 ************************************ 00:26:22.016 END TEST nvmf_fuzz 00:26:22.016 ************************************ 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:22.016 ************************************ 00:26:22.016 START TEST nvmf_multiconnection 00:26:22.016 ************************************ 00:26:22.016 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:26:22.275 * Looking for test storage... 
00:26:22.275 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.275 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:26:22.276 20:47:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.428 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.428 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:26:30.428 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:30.428 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:30.428 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:30.428 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:30.428 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:30.428 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:26:30.428 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:30.428 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:26:30.428 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:26:30.428 20:47:18 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:26:30.428 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:30.429 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:30.429 20:47:18 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:30.429 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:30.429 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:30.429 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # rdma_device_init 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # uname 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:26:30.429 20:47:18 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:30.429 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:30.429 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:30.429 altname enp217s0f0np0 00:26:30.429 altname ens818f0np0 00:26:30.429 inet 192.168.100.8/24 scope global mlx_0_0 00:26:30.429 valid_lft forever preferred_lft forever 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:30.429 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:30.430 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:30.430 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:30.430 altname enp217s0f1np1 00:26:30.430 altname ens818f1np1 00:26:30.430 inet 
192.168.100.9/24 scope global mlx_0_1 00:26:30.430 valid_lft forever preferred_lft forever 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:30.430 192.168.100.9' 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:30.430 192.168.100.9' 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # head -n 1 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@458 -- # head -n 1 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:30.430 192.168.100.9' 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@458 -- # tail -n +2 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1206624 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1206624 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 1206624 ']' 
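[Editor's note] The block above is where common.sh turns the two Mellanox ports into target addresses: get_ip_address() strips the prefix length from `ip -o -4 addr show`, and the first and second entries of RDMA_IP_LIST become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that idiom, reconstructed from the xtrace markers (nvmf/common.sh @112-113 and @456-458); the explicit loop over mlx_0_0/mlx_0_1 is a paraphrase of get_rdma_if_list, not a verbatim excerpt:

    get_ip_address() {
        local interface=$1
        # "-o" gives one line per address, e.g. "6: mlx_0_0 inet 192.168.100.8/24 ..."
        # field 4 is the CIDR address; cut drops the "/24" prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9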
00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.430 20:47:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.430 [2024-07-26 20:47:18.335235] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:26:30.430 [2024-07-26 20:47:18.335287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.430 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.430 [2024-07-26 20:47:18.420728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:30.430 [2024-07-26 20:47:18.462501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.430 [2024-07-26 20:47:18.462542] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.430 [2024-07-26 20:47:18.462553] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.430 [2024-07-26 20:47:18.462561] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.430 [2024-07-26 20:47:18.462569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
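[Editor's note] nvmfappstart has just launched build/bin/nvmf_tgt (-e 0xFFFF enables every tracepoint group, -m 0xF pins reactors to cores 0-3) and is now blocked in waitforlisten until the target answers on /var/tmp/spdk.sock. A simplified sketch of that wait loop follows; the probe via `rpc.py rpc_get_methods` and the 0.5 s sleep are assumptions standing in for whatever check autotest_common.sh actually performs, while the retry bound of 100 matches max_retries in the trace:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0 max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i++ < max_retries )); do
            kill -0 "$pid" 2>/dev/null || return 1      # target process died early
            # Any cheap RPC proves the socket is listening; rpc_get_methods always exists.
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                        # target never came up
    }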
00:26:30.430 [2024-07-26 20:47:18.462618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.430 [2024-07-26 20:47:18.462717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.430 [2024-07-26 20:47:18.462741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:30.430 [2024-07-26 20:47:18.462743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.690 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:30.690 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:26:30.690 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:30.690 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:30.690 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.690 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.690 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:30.690 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.690 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.690 [2024-07-26 20:47:19.225163] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1143ea0/0x1148390) succeed. 00:26:30.690 [2024-07-26 20:47:19.234454] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11454e0/0x1189a20) succeed. 
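[Editor's note] With the target up, the test creates the RDMA transport; the two create_ib_device notices above confirm both mlx5 ports were claimed. rpc_cmd is, in essence, the autotest wrapper around scripts/rpc.py, so the step corresponds to the direct invocation sketched below (in rpc.py terms, -t is the transport type and -u the I/O unit size in bytes):

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
        -t rdma \
        --num-shared-buffers 1024 \
        -u 8192    # io-unit-size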
00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.950 Malloc1 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.950 [2024-07-26 20:47:19.408737] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.950 Malloc2 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 
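[Editor's note] The trace is now replaying the provisioning loop from target/multiconnection.sh (script lines 21-25 in the @-markers above) once per subsystem, eleven times in this run. Reconstructed from the xtrace:

    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"        # 64 MiB bdev, 512 B blocks
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a "$NVMF_FIRST_TARGET_IP" -s 4420
    done

Here -a admits any host NQN and -s sets the serial number; SPDK$i is the string that waitforserial later greps for in `lsblk -l -o NAME,SERIAL` output. The same indexed pattern drives the connect phase further down the log: `nvme connect -i 15 ... -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420` followed by `waitforserial SPDK$i`, once per subsystem.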
00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.950 Malloc3 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.950 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.210 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.210 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:26:31.210 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.210 
20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.210 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.210 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.210 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:31.210 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.210 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.210 Malloc4 00:26:31.210 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.210 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:31.210 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.210 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.210 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 Malloc5 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 Malloc6 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 Malloc7 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 Malloc8 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.211 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.471 Malloc9 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.471 20:47:19 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.471 Malloc10 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:31.471 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.472 Malloc11 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.472 20:47:19 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.472 20:47:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:26:32.409 20:47:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:32.409 20:47:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:32.409 20:47:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:32.409 20:47:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:32.409 20:47:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:34.943 20:47:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:34.943 20:47:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:34.943 20:47:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:26:34.943 20:47:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:34.943 20:47:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:34.943 20:47:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:34.943 20:47:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.943 20:47:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:26:35.510 20:47:23 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:35.510 20:47:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:35.510 20:47:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:35.510 20:47:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:35.510 20:47:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:37.413 20:47:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:37.413 20:47:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:37.413 20:47:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:37.413 20:47:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:37.413 20:47:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:37.413 20:47:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:37.413 20:47:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:37.413 20:47:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:26:38.349 20:47:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:38.349 20:47:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:38.349 20:47:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:38.349 20:47:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:38.349 20:47:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:40.884 20:47:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:40.884 20:47:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:40.884 20:47:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:40.884 20:47:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:40.884 20:47:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:40.884 20:47:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:40.884 20:47:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:40.884 20:47:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:26:41.453 20:47:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:41.453 20:47:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:41.453 20:47:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:41.453 20:47:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:41.453 20:47:29 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:43.359 20:47:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:43.359 20:47:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:43.359 20:47:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:43.359 20:47:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:43.359 20:47:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:43.359 20:47:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:43.359 20:47:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.359 20:47:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:26:44.736 20:47:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:44.736 20:47:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:44.736 20:47:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:44.736 20:47:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:44.736 20:47:32 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:46.638 20:47:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:46.638 20:47:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:46.638 20:47:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:46.638 20:47:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:46.638 20:47:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:46.638 20:47:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:46.638 20:47:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.638 20:47:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:26:47.576 20:47:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:47.576 20:47:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:47.576 20:47:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:47.576 20:47:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:47.576 20:47:35 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:49.508 20:47:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:49.508 20:47:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:49.508 20:47:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:49.508 20:47:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:49.508 20:47:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:49.508 20:47:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:49.508 20:47:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:49.508 20:47:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:26:50.446 20:47:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:50.446 20:47:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:50.446 20:47:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:50.446 20:47:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:50.446 20:47:38 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:52.354 20:47:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:52.613 20:47:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:52.613 20:47:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:52.613 20:47:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:52.613 20:47:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:26:52.613 20:47:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:52.613 20:47:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.613 20:47:40 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:26:53.550 20:47:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:53.550 20:47:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:53.550 20:47:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:53.550 20:47:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:53.550 20:47:41 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:55.452 20:47:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:55.452 20:47:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:55.452 20:47:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:55.452 20:47:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:55.452 20:47:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:55.452 20:47:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:55.452 20:47:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.452 20:47:43 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:26:56.388 20:47:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:56.388 20:47:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:56.388 20:47:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:56.388 20:47:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:56.388 20:47:44 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:58.922 20:47:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:58.922 20:47:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:58.922 20:47:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:58.922 20:47:46 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:58.922 20:47:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:58.922 20:47:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:58.922 20:47:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:58.922 20:47:46 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:26:59.490 20:47:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:59.490 20:47:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:59.491 20:47:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:59.491 20:47:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:59.491 20:47:47 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:01.396 20:47:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:01.396 20:47:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:01.396 20:47:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:27:01.396 20:47:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:01.396 20:47:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:01.396 20:47:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:01.396 20:47:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:01.396 20:47:49 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:27:02.775 20:47:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:02.775 20:47:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:02.775 20:47:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:02.775 20:47:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:02.775 20:47:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:04.682 20:47:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:04.682 20:47:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:04.682 20:47:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:27:04.682 20:47:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:04.682 20:47:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:04.682 20:47:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:04.682 20:47:52 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:04.682 [global] 00:27:04.682 thread=1 00:27:04.682 invalidate=1 00:27:04.682 rw=read 00:27:04.682 time_based=1 00:27:04.682 runtime=10 00:27:04.682 ioengine=libaio 00:27:04.682 direct=1 00:27:04.682 bs=262144 00:27:04.682 iodepth=64 00:27:04.682 norandommap=1 00:27:04.682 numjobs=1 00:27:04.682 00:27:04.682 [job0] 00:27:04.682 filename=/dev/nvme0n1 00:27:04.682 [job1] 00:27:04.682 filename=/dev/nvme10n1 00:27:04.682 [job2] 00:27:04.682 filename=/dev/nvme1n1 00:27:04.682 [job3] 00:27:04.682 filename=/dev/nvme2n1 00:27:04.682 [job4] 00:27:04.682 filename=/dev/nvme3n1 00:27:04.682 [job5] 00:27:04.682 filename=/dev/nvme4n1 00:27:04.682 [job6] 00:27:04.682 filename=/dev/nvme5n1 00:27:04.682 [job7] 00:27:04.682 filename=/dev/nvme6n1 00:27:04.682 [job8] 00:27:04.682 filename=/dev/nvme7n1 00:27:04.682 [job9] 00:27:04.682 filename=/dev/nvme8n1 00:27:04.682 [job10] 00:27:04.682 filename=/dev/nvme9n1 00:27:04.682 Could not set queue depth (nvme0n1) 00:27:04.682 Could not set queue depth (nvme10n1) 00:27:04.682 Could not set queue depth (nvme1n1) 00:27:04.682 Could not set queue depth (nvme2n1) 00:27:04.682 Could not set queue depth (nvme3n1) 00:27:04.682 Could not set queue depth (nvme4n1) 00:27:04.683 Could not set queue depth (nvme5n1) 00:27:04.683 Could not set queue depth (nvme6n1) 00:27:04.683 Could not set queue depth (nvme7n1) 00:27:04.683 Could not set queue depth (nvme8n1) 00:27:04.683 Could not set queue depth (nvme9n1) 00:27:05.246 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:05.246 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:05.246 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:05.247 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:05.247 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:05.247 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:05.247 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:05.247 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:05.247 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:05.247 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:05.247 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:05.247 fio-3.35 00:27:05.247 Starting 11 threads 00:27:17.498 00:27:17.498 job0: (groupid=0, jobs=1): err= 0: pid=1212836: Fri Jul 26 20:48:03 2024 00:27:17.498 read: IOPS=1434, BW=359MiB/s (376MB/s)(3596MiB/10027msec) 00:27:17.498 slat (usec): min=11, max=25976, avg=688.52, stdev=1759.99 00:27:17.498 clat (msec): min=11, max=105, avg=43.88, stdev=15.03 00:27:17.498 lat (msec): min=12, max=105, avg=44.57, stdev=15.33 00:27:17.498 clat percentiles (usec): 00:27:17.498 | 1.00th=[23725], 5.00th=[28181], 10.00th=[29492], 20.00th=[30016], 00:27:17.498 | 30.00th=[30802], 40.00th=[31851], 50.00th=[45876], 60.00th=[47449], 00:27:17.498 | 70.00th=[48497], 80.00th=[61080], 90.00th=[64226], 95.00th=[72877], 00:27:17.498 | 99.00th=[80217], 99.50th=[83362], 99.90th=[85459], 99.95th=[87557], 00:27:17.498 | 99.99th=[94897] 00:27:17.498 bw ( KiB/s): min=208384, max=551424, per=9.07%, avg=366617.60, stdev=113984.43, samples=20 00:27:17.498 iops : min= 814, max= 2154, avg=1432.10, stdev=445.25, samples=20 00:27:17.498 lat (msec) : 20=0.40%, 50=74.10%, 100=25.49%, 250=0.01% 00:27:17.498 cpu : usr=0.47%, sys=5.53%, ctx=2937, majf=0, minf=4097 00:27:17.498 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:27:17.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:17.498 issued rwts: total=14384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.498 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:17.498 job1: (groupid=0, jobs=1): err= 0: pid=1212837: Fri Jul 26 20:48:03 2024 00:27:17.498 read: IOPS=1253, BW=313MiB/s (328MB/s)(3142MiB/10030msec) 00:27:17.498 slat (usec): min=13, max=37038, avg=791.98, stdev=2759.88 00:27:17.498 clat (msec): min=9, max=125, avg=50.23, stdev=22.88 00:27:17.498 lat (msec): min=9, max=125, avg=51.02, stdev=23.37 00:27:17.498 clat percentiles (msec): 00:27:17.498 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:27:17.498 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 48], 00:27:17.498 | 70.00th=[ 68], 80.00th=[ 81], 90.00th=[ 83], 95.00th=[ 87], 00:27:17.498 | 99.00th=[ 93], 99.50th=[ 97], 99.90th=[ 113], 99.95th=[ 117], 00:27:17.498 | 99.99th=[ 120] 00:27:17.498 bw ( KiB/s): min=172544, max=505344, per=7.92%, avg=320128.00, stdev=143881.53, samples=20 00:27:17.498 iops : min= 674, max= 1974, avg=1250.50, stdev=562.04, samples=20 00:27:17.498 lat (msec) : 10=0.02%, 20=0.18%, 50=60.83%, 100=38.56%, 250=0.42% 00:27:17.498 cpu : usr=0.48%, sys=5.27%, ctx=2389, majf=0, minf=3221 00:27:17.498 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:27:17.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:17.498 issued rwts: total=12568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.498 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:17.498 job2: (groupid=0, jobs=1): err= 0: pid=1212841: Fri Jul 26 20:48:03 2024 00:27:17.498 read: IOPS=1179, BW=295MiB/s (309MB/s)(2963MiB/10051msec) 00:27:17.498 slat (usec): min=14, max=21766, avg=839.62, stdev=2225.49 00:27:17.498 clat (msec): min=12, max=113, avg=53.38, stdev=14.20 00:27:17.498 lat (msec): min=12, max=113, avg=54.22, stdev=14.53 00:27:17.498 clat percentiles (msec): 00:27:17.498 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 46], 00:27:17.498 | 30.00th=[ 47], 
40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 62], 00:27:17.498 | 70.00th=[ 64], 80.00th=[ 65], 90.00th=[ 68], 95.00th=[ 79], 00:27:17.498 | 99.00th=[ 84], 99.50th=[ 87], 99.90th=[ 107], 99.95th=[ 112], 00:27:17.498 | 99.99th=[ 114] 00:27:17.498 bw ( KiB/s): min=203264, max=551424, per=7.47%, avg=301798.40, stdev=83646.89, samples=20 00:27:17.498 iops : min= 794, max= 2154, avg=1178.90, stdev=326.75, samples=20 00:27:17.498 lat (msec) : 20=0.24%, 50=49.06%, 100=50.53%, 250=0.18% 00:27:17.498 cpu : usr=0.57%, sys=5.24%, ctx=2301, majf=0, minf=4097 00:27:17.498 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:27:17.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:17.498 issued rwts: total=11852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.498 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:17.498 job3: (groupid=0, jobs=1): err= 0: pid=1212846: Fri Jul 26 20:48:03 2024 00:27:17.498 read: IOPS=1574, BW=394MiB/s (413MB/s)(3953MiB/10042msec) 00:27:17.498 slat (usec): min=11, max=20588, avg=621.28, stdev=1499.31 00:27:17.498 clat (usec): min=6218, max=83950, avg=39974.52, stdev=11328.62 00:27:17.498 lat (usec): min=6450, max=83969, avg=40595.80, stdev=11550.45 00:27:17.498 clat percentiles (usec): 00:27:17.498 | 1.00th=[26346], 5.00th=[28443], 10.00th=[28705], 20.00th=[29754], 00:27:17.498 | 30.00th=[30540], 40.00th=[31589], 50.00th=[40633], 60.00th=[43779], 00:27:17.498 | 70.00th=[44827], 80.00th=[46400], 90.00th=[61604], 95.00th=[63701], 00:27:17.498 | 99.00th=[67634], 99.50th=[69731], 99.90th=[78119], 99.95th=[80217], 00:27:17.498 | 99.99th=[82314] 00:27:17.498 bw ( KiB/s): min=248832, max=538624, per=9.98%, avg=403200.00, stdev=99457.17, samples=20 00:27:17.498 iops : min= 972, max= 2104, avg=1575.00, stdev=388.50, samples=20 00:27:17.498 lat (msec) : 10=0.18%, 20=0.52%, 50=86.59%, 100=12.70% 00:27:17.498 cpu : usr=0.37%, sys=5.03%, ctx=3316, majf=0, minf=4097 00:27:17.498 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:27:17.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:17.498 issued rwts: total=15813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.498 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:17.498 job4: (groupid=0, jobs=1): err= 0: pid=1212849: Fri Jul 26 20:48:03 2024 00:27:17.498 read: IOPS=1253, BW=313MiB/s (329MB/s)(3143MiB/10027msec) 00:27:17.498 slat (usec): min=13, max=19533, avg=791.49, stdev=2059.43 00:27:17.498 clat (msec): min=12, max=105, avg=50.21, stdev=22.71 00:27:17.498 lat (msec): min=13, max=105, avg=51.00, stdev=23.13 00:27:17.498 clat percentiles (msec): 00:27:17.498 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:27:17.498 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 48], 00:27:17.498 | 70.00th=[ 68], 80.00th=[ 81], 90.00th=[ 83], 95.00th=[ 87], 00:27:17.498 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 100], 99.95th=[ 102], 00:27:17.498 | 99.99th=[ 106] 00:27:17.498 bw ( KiB/s): min=184176, max=504832, per=7.92%, avg=320223.20, stdev=143786.79, samples=20 00:27:17.499 iops : min= 719, max= 1972, avg=1250.85, stdev=561.69, samples=20 00:27:17.499 lat (msec) : 20=0.14%, 50=60.60%, 100=39.19%, 250=0.07% 00:27:17.499 cpu : usr=0.56%, sys=5.42%, ctx=2453, majf=0, minf=4097 00:27:17.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:27:17.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:17.499 issued rwts: total=12571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.499 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:17.499 job5: (groupid=0, jobs=1): err= 0: pid=1212862: Fri Jul 26 20:48:03 2024 00:27:17.499 read: IOPS=1043, BW=261MiB/s (273MB/s)(2618MiB/10039msec) 00:27:17.499 slat (usec): min=14, max=33136, avg=946.58, stdev=2585.52 00:27:17.499 clat (msec): min=12, max=111, avg=60.35, stdev=17.61 00:27:17.499 lat (msec): min=13, max=113, avg=61.30, stdev=18.03 00:27:17.499 clat percentiles (msec): 00:27:17.499 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 44], 20.00th=[ 45], 00:27:17.499 | 30.00th=[ 46], 40.00th=[ 47], 50.00th=[ 50], 60.00th=[ 64], 00:27:17.499 | 70.00th=[ 80], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 88], 00:27:17.499 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 105], 99.95th=[ 107], 00:27:17.499 | 99.99th=[ 112] 00:27:17.499 bw ( KiB/s): min=184320, max=361984, per=6.59%, avg=266488.95, stdev=77052.69, samples=20 00:27:17.499 iops : min= 720, max= 1414, avg=1040.95, stdev=301.01, samples=20 00:27:17.499 lat (msec) : 20=0.22%, 50=50.74%, 100=48.84%, 250=0.19% 00:27:17.499 cpu : usr=0.44%, sys=4.94%, ctx=2105, majf=0, minf=4097 00:27:17.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:17.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:17.499 issued rwts: total=10472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.499 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:17.499 job6: (groupid=0, jobs=1): err= 0: pid=1212867: Fri Jul 26 20:48:03 2024 00:27:17.499 read: IOPS=927, BW=232MiB/s (243MB/s)(2329MiB/10049msec) 00:27:17.499 slat (usec): min=11, max=42039, avg=1053.33, stdev=3129.69 00:27:17.499 clat (msec): min=12, max=129, avg=67.91, stdev=17.55 00:27:17.499 lat (msec): min=12, max=129, avg=68.96, stdev=18.05 00:27:17.499 clat percentiles (msec): 00:27:17.499 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 62], 00:27:17.499 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 80], 00:27:17.499 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 86], 95.00th=[ 89], 00:27:17.499 | 99.00th=[ 96], 99.50th=[ 101], 99.90th=[ 118], 99.95th=[ 121], 00:27:17.499 | 99.99th=[ 130] 00:27:17.499 bw ( KiB/s): min=183663, max=446976, per=5.86%, avg=236895.15, stdev=65005.54, samples=20 00:27:17.499 iops : min= 717, max= 1746, avg=925.35, stdev=253.95, samples=20 00:27:17.499 lat (msec) : 20=0.23%, 50=15.56%, 100=83.69%, 250=0.52% 00:27:17.499 cpu : usr=0.34%, sys=3.64%, ctx=2049, majf=0, minf=4097 00:27:17.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:27:17.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:17.499 issued rwts: total=9316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.499 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:17.499 job7: (groupid=0, jobs=1): err= 0: pid=1212871: Fri Jul 26 20:48:03 2024 00:27:17.499 read: IOPS=2342, BW=586MiB/s (614MB/s)(5886MiB/10051msec) 00:27:17.499 slat (usec): min=11, max=25334, avg=414.46, stdev=1408.28 00:27:17.499 clat (usec): min=1042, max=108344, avg=26879.88, 
stdev=19199.03 00:27:17.499 lat (usec): min=1098, max=108401, avg=27294.34, stdev=19522.79 00:27:17.499 clat percentiles (msec): 00:27:17.499 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 15], 20.00th=[ 16], 00:27:17.499 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 17], 00:27:17.499 | 70.00th=[ 30], 80.00th=[ 41], 90.00th=[ 64], 95.00th=[ 67], 00:27:17.499 | 99.00th=[ 81], 99.50th=[ 84], 99.90th=[ 92], 99.95th=[ 103], 00:27:17.499 | 99.99th=[ 109] 00:27:17.499 bw ( KiB/s): min=198144, max=1055232, per=14.88%, avg=601099.45, stdev=364000.74, samples=20 00:27:17.499 iops : min= 774, max= 4122, avg=2348.00, stdev=1421.91, samples=20 00:27:17.499 lat (msec) : 2=0.03%, 4=0.13%, 10=0.37%, 20=65.50%, 50=18.47% 00:27:17.499 lat (msec) : 100=15.44%, 250=0.06% 00:27:17.499 cpu : usr=0.44%, sys=6.57%, ctx=4927, majf=0, minf=4097 00:27:17.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:17.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:17.499 issued rwts: total=23542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.499 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:17.499 job8: (groupid=0, jobs=1): err= 0: pid=1212886: Fri Jul 26 20:48:03 2024 00:27:17.499 read: IOPS=2007, BW=502MiB/s (526MB/s)(5040MiB/10041msec) 00:27:17.499 slat (usec): min=11, max=11505, avg=493.34, stdev=1197.61 00:27:17.499 clat (usec): min=7185, max=80413, avg=31345.89, stdev=11188.72 00:27:17.499 lat (usec): min=7472, max=80448, avg=31839.24, stdev=11390.44 00:27:17.499 clat percentiles (usec): 00:27:17.499 | 1.00th=[13304], 5.00th=[14484], 10.00th=[15008], 20.00th=[15795], 00:27:17.499 | 30.00th=[28705], 40.00th=[29492], 50.00th=[30540], 60.00th=[31327], 00:27:17.499 | 70.00th=[40633], 80.00th=[44303], 90.00th=[45876], 95.00th=[46924], 00:27:17.499 | 99.00th=[50594], 99.50th=[52167], 99.90th=[66847], 99.95th=[76022], 00:27:17.499 | 99.99th=[80217] 00:27:17.499 bw ( KiB/s): min=351744, max=1072640, per=12.73%, avg=514529.10, stdev=198279.06, samples=20 00:27:17.499 iops : min= 1374, max= 4190, avg=2009.85, stdev=774.54, samples=20 00:27:17.499 lat (msec) : 10=0.12%, 20=22.88%, 50=75.81%, 100=1.19% 00:27:17.499 cpu : usr=0.55%, sys=6.84%, ctx=3761, majf=0, minf=4097 00:27:17.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:27:17.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:17.499 issued rwts: total=20160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.499 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:17.499 job9: (groupid=0, jobs=1): err= 0: pid=1212896: Fri Jul 26 20:48:03 2024 00:27:17.499 read: IOPS=1732, BW=433MiB/s (454MB/s)(4343MiB/10028msec) 00:27:17.499 slat (usec): min=11, max=22379, avg=566.17, stdev=1761.47 00:27:17.499 clat (msec): min=12, max=104, avg=36.34, stdev=24.94 00:27:17.499 lat (msec): min=12, max=107, avg=36.91, stdev=25.37 00:27:17.499 clat percentiles (msec): 00:27:17.499 | 1.00th=[ 15], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 16], 00:27:17.499 | 30.00th=[ 17], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:27:17.499 | 70.00th=[ 32], 80.00th=[ 79], 90.00th=[ 82], 95.00th=[ 86], 00:27:17.499 | 99.00th=[ 91], 99.50th=[ 94], 99.90th=[ 100], 99.95th=[ 100], 00:27:17.499 | 99.99th=[ 106] 00:27:17.499 bw ( KiB/s): min=184320, max=1041920, per=10.96%, avg=443084.80, 
stdev=281836.76, samples=20 00:27:17.499 iops : min= 720, max= 4070, avg=1730.80, stdev=1100.92, samples=20 00:27:17.499 lat (msec) : 20=36.01%, 50=42.99%, 100=20.96%, 250=0.03% 00:27:17.499 cpu : usr=0.29%, sys=5.38%, ctx=3715, majf=0, minf=4097 00:27:17.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:27:17.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:17.499 issued rwts: total=17371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.499 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:17.499 job10: (groupid=0, jobs=1): err= 0: pid=1212906: Fri Jul 26 20:48:03 2024 00:27:17.499 read: IOPS=1055, BW=264MiB/s (277MB/s)(2651MiB/10051msec) 00:27:17.499 slat (usec): min=11, max=26757, avg=924.95, stdev=2370.64 00:27:17.499 clat (msec): min=12, max=115, avg=59.66, stdev=12.87 00:27:17.499 lat (msec): min=12, max=119, avg=60.59, stdev=13.22 00:27:17.499 clat percentiles (msec): 00:27:17.499 | 1.00th=[ 42], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 48], 00:27:17.499 | 30.00th=[ 49], 40.00th=[ 52], 50.00th=[ 63], 60.00th=[ 64], 00:27:17.499 | 70.00th=[ 64], 80.00th=[ 67], 90.00th=[ 80], 95.00th=[ 87], 00:27:17.499 | 99.00th=[ 91], 99.50th=[ 95], 99.90th=[ 111], 99.95th=[ 111], 00:27:17.499 | 99.99th=[ 114] 00:27:17.499 bw ( KiB/s): min=185344, max=342528, per=6.68%, avg=269894.40, stdev=51706.29, samples=20 00:27:17.499 iops : min= 724, max= 1338, avg=1054.25, stdev=202.02, samples=20 00:27:17.499 lat (msec) : 20=0.29%, 50=36.49%, 100=62.99%, 250=0.23% 00:27:17.499 cpu : usr=0.39%, sys=4.31%, ctx=2199, majf=0, minf=4097 00:27:17.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:17.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:17.499 issued rwts: total=10605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.499 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:17.499 00:27:17.499 Run status group 0 (all jobs): 00:27:17.499 READ: bw=3946MiB/s (4138MB/s), 232MiB/s-586MiB/s (243MB/s-614MB/s), io=38.7GiB (41.6GB), run=10027-10051msec 00:27:17.499 00:27:17.499 Disk stats (read/write): 00:27:17.499 nvme0n1: ios=28180/0, merge=0/0, ticks=1220567/0, in_queue=1220567, util=96.78% 00:27:17.499 nvme10n1: ios=24553/0, merge=0/0, ticks=1222281/0, in_queue=1222281, util=97.01% 00:27:17.499 nvme1n1: ios=23367/0, merge=0/0, ticks=1221705/0, in_queue=1221705, util=97.33% 00:27:17.499 nvme2n1: ios=31188/0, merge=0/0, ticks=1218151/0, in_queue=1218151, util=97.53% 00:27:17.499 nvme3n1: ios=24567/0, merge=0/0, ticks=1223333/0, in_queue=1223333, util=97.60% 00:27:17.499 nvme4n1: ios=20478/0, merge=0/0, ticks=1222736/0, in_queue=1222736, util=98.03% 00:27:17.499 nvme5n1: ios=18288/0, merge=0/0, ticks=1220621/0, in_queue=1220621, util=98.24% 00:27:17.499 nvme6n1: ios=46724/0, merge=0/0, ticks=1214667/0, in_queue=1214667, util=98.38% 00:27:17.499 nvme7n1: ios=39872/0, merge=0/0, ticks=1216694/0, in_queue=1216694, util=98.88% 00:27:17.499 nvme8n1: ios=34150/0, merge=0/0, ticks=1218951/0, in_queue=1218951, util=99.13% 00:27:17.499 nvme9n1: ios=20850/0, merge=0/0, ticks=1220328/0, in_queue=1220328, util=99.29% 00:27:17.499 20:48:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 
-t randwrite -r 10 00:27:17.499 [global] 00:27:17.500 thread=1 00:27:17.500 invalidate=1 00:27:17.500 rw=randwrite 00:27:17.500 time_based=1 00:27:17.500 runtime=10 00:27:17.500 ioengine=libaio 00:27:17.500 direct=1 00:27:17.500 bs=262144 00:27:17.500 iodepth=64 00:27:17.500 norandommap=1 00:27:17.500 numjobs=1 00:27:17.500 00:27:17.500 [job0] 00:27:17.500 filename=/dev/nvme0n1 00:27:17.500 [job1] 00:27:17.500 filename=/dev/nvme10n1 00:27:17.500 [job2] 00:27:17.500 filename=/dev/nvme1n1 00:27:17.500 [job3] 00:27:17.500 filename=/dev/nvme2n1 00:27:17.500 [job4] 00:27:17.500 filename=/dev/nvme3n1 00:27:17.500 [job5] 00:27:17.500 filename=/dev/nvme4n1 00:27:17.500 [job6] 00:27:17.500 filename=/dev/nvme5n1 00:27:17.500 [job7] 00:27:17.500 filename=/dev/nvme6n1 00:27:17.500 [job8] 00:27:17.500 filename=/dev/nvme7n1 00:27:17.500 [job9] 00:27:17.500 filename=/dev/nvme8n1 00:27:17.500 [job10] 00:27:17.500 filename=/dev/nvme9n1 00:27:17.500 Could not set queue depth (nvme0n1) 00:27:17.500 Could not set queue depth (nvme10n1) 00:27:17.500 Could not set queue depth (nvme1n1) 00:27:17.500 Could not set queue depth (nvme2n1) 00:27:17.500 Could not set queue depth (nvme3n1) 00:27:17.500 Could not set queue depth (nvme4n1) 00:27:17.500 Could not set queue depth (nvme5n1) 00:27:17.500 Could not set queue depth (nvme6n1) 00:27:17.500 Could not set queue depth (nvme7n1) 00:27:17.500 Could not set queue depth (nvme8n1) 00:27:17.500 Could not set queue depth (nvme9n1) 00:27:17.500 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:17.500 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:17.500 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:17.500 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:17.500 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:17.500 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:17.500 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:17.500 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:17.500 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:17.500 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:17.500 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:17.500 fio-3.35 00:27:17.500 Starting 11 threads 00:27:27.479 00:27:27.479 job0: (groupid=0, jobs=1): err= 0: pid=1214899: Fri Jul 26 20:48:15 2024 00:27:27.479 write: IOPS=754, BW=189MiB/s (198MB/s)(1900MiB/10076msec); 0 zone resets 00:27:27.479 slat (usec): min=24, max=46602, avg=1304.51, stdev=2853.86 00:27:27.479 clat (msec): min=2, max=162, avg=83.51, stdev=19.54 00:27:27.480 lat (msec): min=2, max=163, avg=84.81, stdev=19.89 00:27:27.480 clat percentiles (msec): 00:27:27.480 | 1.00th=[ 51], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 69], 00:27:27.480 | 30.00th=[ 74], 40.00th=[ 77], 50.00th=[ 83], 60.00th=[ 92], 00:27:27.480 | 70.00th=[ 100], 80.00th=[ 
104], 90.00th=[ 107], 95.00th=[ 110], 00:27:27.480 | 99.00th=[ 118], 99.50th=[ 126], 99.90th=[ 155], 99.95th=[ 155], 00:27:27.480 | 99.99th=[ 163] 00:27:27.480 bw ( KiB/s): min=150528, max=293376, per=5.64%, avg=192950.45, stdev=42421.03, samples=20 00:27:27.480 iops : min= 588, max= 1146, avg=753.70, stdev=165.70, samples=20 00:27:27.480 lat (msec) : 4=0.11%, 10=0.11%, 20=0.11%, 50=0.66%, 100=70.10% 00:27:27.480 lat (msec) : 250=28.93% 00:27:27.480 cpu : usr=1.92%, sys=3.47%, ctx=1903, majf=0, minf=1 00:27:27.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:27.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.480 issued rwts: total=0,7601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.480 job1: (groupid=0, jobs=1): err= 0: pid=1214925: Fri Jul 26 20:48:15 2024 00:27:27.480 write: IOPS=1213, BW=303MiB/s (318MB/s)(3055MiB/10072msec); 0 zone resets 00:27:27.480 slat (usec): min=15, max=39322, avg=801.02, stdev=2278.91 00:27:27.480 clat (usec): min=1487, max=162573, avg=51938.38, stdev=31960.83 00:27:27.480 lat (usec): min=1577, max=162654, avg=52739.40, stdev=32495.92 00:27:27.480 clat percentiles (msec): 00:27:27.480 | 1.00th=[ 16], 5.00th=[ 18], 10.00th=[ 19], 20.00th=[ 34], 00:27:27.480 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 37], 00:27:27.480 | 70.00th=[ 46], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 107], 00:27:27.480 | 99.00th=[ 113], 99.50th=[ 123], 99.90th=[ 140], 99.95th=[ 161], 00:27:27.480 | 99.99th=[ 163] 00:27:27.480 bw ( KiB/s): min=150016, max=700928, per=9.10%, avg=311168.00, stdev=176919.90, samples=20 00:27:27.480 iops : min= 586, max= 2738, avg=1215.50, stdev=691.09, samples=20 00:27:27.480 lat (msec) : 2=0.02%, 4=0.07%, 10=0.25%, 20=10.49%, 50=59.57% 00:27:27.480 lat (msec) : 100=12.11%, 250=17.49% 00:27:27.480 cpu : usr=2.42%, sys=3.59%, ctx=2932, majf=0, minf=1 00:27:27.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:27:27.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.480 issued rwts: total=0,12218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.480 job2: (groupid=0, jobs=1): err= 0: pid=1214950: Fri Jul 26 20:48:15 2024 00:27:27.480 write: IOPS=1494, BW=374MiB/s (392MB/s)(3751MiB/10037msec); 0 zone resets 00:27:27.480 slat (usec): min=19, max=12480, avg=643.09, stdev=1415.29 00:27:27.480 clat (msec): min=9, max=100, avg=42.16, stdev=22.45 00:27:27.480 lat (msec): min=9, max=100, avg=42.81, stdev=22.79 00:27:27.480 clat percentiles (msec): 00:27:27.480 | 1.00th=[ 17], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 18], 00:27:27.480 | 30.00th=[ 20], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 49], 00:27:27.480 | 70.00th=[ 53], 80.00th=[ 69], 90.00th=[ 77], 95.00th=[ 83], 00:27:27.480 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 99], 99.95th=[ 99], 00:27:27.480 | 99.99th=[ 101] 00:27:27.480 bw ( KiB/s): min=197120, max=918016, per=11.19%, avg=382438.40, stdev=224938.64, samples=20 00:27:27.480 iops : min= 770, max= 3586, avg=1493.90, stdev=878.67, samples=20 00:27:27.480 lat (msec) : 10=0.01%, 20=31.00%, 50=31.46%, 100=37.51%, 250=0.02% 00:27:27.480 cpu : usr=2.79%, sys=5.00%, ctx=3671, majf=0, minf=1 00:27:27.480 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:27:27.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.480 issued rwts: total=0,15002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.480 job3: (groupid=0, jobs=1): err= 0: pid=1214961: Fri Jul 26 20:48:15 2024 00:27:27.480 write: IOPS=800, BW=200MiB/s (210MB/s)(2015MiB/10070msec); 0 zone resets 00:27:27.480 slat (usec): min=22, max=53818, avg=1216.13, stdev=2640.73 00:27:27.480 clat (msec): min=8, max=179, avg=78.70, stdev=25.11 00:27:27.480 lat (msec): min=8, max=179, avg=79.92, stdev=25.54 00:27:27.480 clat percentiles (msec): 00:27:27.480 | 1.00th=[ 17], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 58], 00:27:27.480 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 90], 00:27:27.480 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 107], 95.00th=[ 109], 00:27:27.480 | 99.00th=[ 116], 99.50th=[ 122], 99.90th=[ 157], 99.95th=[ 161], 00:27:27.480 | 99.99th=[ 180] 00:27:27.480 bw ( KiB/s): min=149504, max=437098, per=5.99%, avg=204792.50, stdev=69931.55, samples=20 00:27:27.480 iops : min= 584, max= 1707, avg=799.95, stdev=273.10, samples=20 00:27:27.480 lat (msec) : 10=0.05%, 20=2.39%, 50=12.22%, 100=58.36%, 250=26.98% 00:27:27.480 cpu : usr=1.63%, sys=2.98%, ctx=2044, majf=0, minf=1 00:27:27.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:27.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.480 issued rwts: total=0,8061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.480 job4: (groupid=0, jobs=1): err= 0: pid=1214966: Fri Jul 26 20:48:15 2024 00:27:27.480 write: IOPS=778, BW=195MiB/s (204MB/s)(1961MiB/10071msec); 0 zone resets 00:27:27.480 slat (usec): min=23, max=17795, avg=1271.30, stdev=2559.38 00:27:27.480 clat (msec): min=11, max=164, avg=80.88, stdev=19.65 00:27:27.480 lat (msec): min=11, max=164, avg=82.15, stdev=20.01 00:27:27.480 clat percentiles (msec): 00:27:27.480 | 1.00th=[ 51], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 64], 00:27:27.480 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 89], 00:27:27.480 | 70.00th=[ 97], 80.00th=[ 104], 90.00th=[ 106], 95.00th=[ 108], 00:27:27.480 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 155], 99.95th=[ 157], 00:27:27.480 | 99.99th=[ 165] 00:27:27.480 bw ( KiB/s): min=151040, max=291328, per=5.83%, avg=199168.00, stdev=47581.07, samples=20 00:27:27.480 iops : min= 590, max= 1138, avg=778.00, stdev=185.86, samples=20 00:27:27.480 lat (msec) : 20=0.15%, 50=0.65%, 100=71.57%, 250=27.63% 00:27:27.480 cpu : usr=1.87%, sys=3.26%, ctx=1901, majf=0, minf=1 00:27:27.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:27.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.480 issued rwts: total=0,7843,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.480 job5: (groupid=0, jobs=1): err= 0: pid=1214987: Fri Jul 26 20:48:15 2024 00:27:27.480 write: IOPS=777, BW=194MiB/s (204MB/s)(1957MiB/10072msec); 0 zone resets 00:27:27.480 slat (usec): min=24, max=22128, avg=1271.86, 
stdev=2660.76 00:27:27.480 clat (msec): min=10, max=164, avg=81.04, stdev=20.01 00:27:27.480 lat (msec): min=10, max=164, avg=82.31, stdev=20.37 00:27:27.480 clat percentiles (msec): 00:27:27.480 | 1.00th=[ 51], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 63], 00:27:27.480 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 89], 00:27:27.480 | 70.00th=[ 99], 80.00th=[ 104], 90.00th=[ 107], 95.00th=[ 110], 00:27:27.480 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 150], 99.95th=[ 150], 00:27:27.480 | 99.99th=[ 165] 00:27:27.480 bw ( KiB/s): min=147968, max=292864, per=5.82%, avg=198809.60, stdev=48337.13, samples=20 00:27:27.480 iops : min= 578, max= 1144, avg=776.60, stdev=188.82, samples=20 00:27:27.480 lat (msec) : 20=0.17%, 50=0.63%, 100=71.11%, 250=28.10% 00:27:27.480 cpu : usr=2.08%, sys=3.20%, ctx=1959, majf=0, minf=1 00:27:27.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:27.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.480 issued rwts: total=0,7829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.480 job6: (groupid=0, jobs=1): err= 0: pid=1214996: Fri Jul 26 20:48:15 2024 00:27:27.480 write: IOPS=1156, BW=289MiB/s (303MB/s)(2901MiB/10037msec); 0 zone resets 00:27:27.480 slat (usec): min=22, max=13598, avg=856.73, stdev=1574.00 00:27:27.480 clat (msec): min=17, max=101, avg=54.48, stdev=16.64 00:27:27.480 lat (msec): min=18, max=101, avg=55.33, stdev=16.87 00:27:27.480 clat percentiles (msec): 00:27:27.480 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:27:27.480 | 30.00th=[ 39], 40.00th=[ 51], 50.00th=[ 53], 60.00th=[ 56], 00:27:27.480 | 70.00th=[ 58], 80.00th=[ 74], 90.00th=[ 79], 95.00th=[ 84], 00:27:27.480 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 97], 99.95th=[ 101], 00:27:27.480 | 99.99th=[ 102] 00:27:27.480 bw ( KiB/s): min=193024, max=441856, per=8.64%, avg=295475.20, stdev=89533.21, samples=20 00:27:27.480 iops : min= 754, max= 1726, avg=1154.20, stdev=349.74, samples=20 00:27:27.480 lat (msec) : 20=0.03%, 50=40.18%, 100=59.73%, 250=0.05% 00:27:27.480 cpu : usr=2.77%, sys=4.65%, ctx=2912, majf=0, minf=1 00:27:27.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:27:27.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.480 issued rwts: total=0,11605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.480 job7: (groupid=0, jobs=1): err= 0: pid=1215007: Fri Jul 26 20:48:15 2024 00:27:27.480 write: IOPS=791, BW=198MiB/s (208MB/s)(1993MiB/10067msec); 0 zone resets 00:27:27.480 slat (usec): min=22, max=32272, avg=1194.13, stdev=2747.27 00:27:27.480 clat (msec): min=6, max=159, avg=79.60, stdev=23.18 00:27:27.480 lat (msec): min=6, max=159, avg=80.80, stdev=23.59 00:27:27.480 clat percentiles (msec): 00:27:27.480 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 65], 00:27:27.480 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 92], 00:27:27.480 | 70.00th=[ 97], 80.00th=[ 104], 90.00th=[ 106], 95.00th=[ 109], 00:27:27.480 | 99.00th=[ 123], 99.50th=[ 130], 99.90th=[ 148], 99.95th=[ 153], 00:27:27.480 | 99.99th=[ 159] 00:27:27.481 bw ( KiB/s): min=150016, max=446976, per=5.92%, avg=202444.80, stdev=68611.70, samples=20 
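[editor's note] A quick consistency check that these per-job fio summaries hang together: the fio-wrapper call (-p nvmf -i 262144 -d 64 -t randwrite -r 10) surfaces in the [global] section as bs=262144, iodepth=64, rw=randwrite and runtime=10, and with a fixed 256 KiB block size the bw and iops lines should differ by exactly that factor. For job7 here, 202444.80 KiB/s / 256 KiB = 790.8, matching the iops average on the line that immediately follows; the earlier read run behaves the same way (job0: 366617.60 / 256 = 1432.1, against a reported iops average of 1432.10).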
00:27:27.481 iops : min= 586, max= 1746, avg=790.80, stdev=268.01, samples=20 00:27:27.481 lat (msec) : 10=0.10%, 20=0.24%, 50=11.68%, 100=60.97%, 250=27.01% 00:27:27.481 cpu : usr=1.75%, sys=2.93%, ctx=2098, majf=0, minf=1 00:27:27.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:27.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.481 issued rwts: total=0,7971,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.481 job8: (groupid=0, jobs=1): err= 0: pid=1215056: Fri Jul 26 20:48:15 2024 00:27:27.481 write: IOPS=2567, BW=642MiB/s (673MB/s)(6437MiB/10026msec); 0 zone resets 00:27:27.481 slat (usec): min=16, max=4510, avg=383.70, stdev=756.17 00:27:27.481 clat (usec): min=8241, max=82894, avg=24532.00, stdev=8296.31 00:27:27.481 lat (usec): min=8266, max=82919, avg=24915.70, stdev=8410.35 00:27:27.481 clat percentiles (usec): 00:27:27.481 | 1.00th=[15795], 5.00th=[17171], 10.00th=[17433], 20.00th=[17957], 00:27:27.481 | 30.00th=[18220], 40.00th=[18482], 50.00th=[19006], 60.00th=[20055], 00:27:27.481 | 70.00th=[33162], 80.00th=[34341], 90.00th=[36439], 95.00th=[37487], 00:27:27.481 | 99.00th=[39060], 99.50th=[39584], 99.90th=[50070], 99.95th=[57934], 00:27:27.481 | 99.99th=[79168] 00:27:27.481 bw ( KiB/s): min=430080, max=887296, per=19.23%, avg=657484.80, stdev=192752.60, samples=20 00:27:27.481 iops : min= 1680, max= 3466, avg=2568.30, stdev=752.94, samples=20 00:27:27.481 lat (msec) : 10=0.02%, 20=60.11%, 50=39.78%, 100=0.10% 00:27:27.481 cpu : usr=3.73%, sys=5.50%, ctx=5559, majf=0, minf=1 00:27:27.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:27.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.481 issued rwts: total=0,25746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.481 job9: (groupid=0, jobs=1): err= 0: pid=1215077: Fri Jul 26 20:48:15 2024 00:27:27.481 write: IOPS=1902, BW=476MiB/s (499MB/s)(4768MiB/10026msec); 0 zone resets 00:27:27.481 slat (usec): min=17, max=10427, avg=510.97, stdev=1129.40 00:27:27.481 clat (usec): min=651, max=72004, avg=33122.15, stdev=11861.83 00:27:27.481 lat (usec): min=756, max=75028, avg=33633.13, stdev=12056.15 00:27:27.481 clat percentiles (usec): 00:27:27.481 | 1.00th=[11731], 5.00th=[16909], 10.00th=[17433], 20.00th=[18482], 00:27:27.481 | 30.00th=[32637], 40.00th=[33817], 50.00th=[34341], 60.00th=[34866], 00:27:27.481 | 70.00th=[35914], 80.00th=[36963], 90.00th=[42730], 95.00th=[61080], 00:27:27.481 | 99.00th=[66847], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:27:27.481 | 99.99th=[71828] 00:27:27.481 bw ( KiB/s): min=251904, max=914944, per=14.24%, avg=486656.00, stdev=169262.15, samples=20 00:27:27.481 iops : min= 984, max= 3574, avg=1901.00, stdev=661.18, samples=20 00:27:27.481 lat (usec) : 750=0.02%, 1000=0.03% 00:27:27.481 lat (msec) : 2=0.07%, 4=0.24%, 10=0.45%, 20=22.80%, 50=67.06% 00:27:27.481 lat (msec) : 100=9.34% 00:27:27.481 cpu : usr=3.43%, sys=4.99%, ctx=4552, majf=0, minf=1 00:27:27.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:27:27.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.481 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.481 issued rwts: total=0,19073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.481 job10: (groupid=0, jobs=1): err= 0: pid=1215093: Fri Jul 26 20:48:15 2024 00:27:27.481 write: IOPS=1156, BW=289MiB/s (303MB/s)(2902MiB/10037msec); 0 zone resets 00:27:27.481 slat (usec): min=21, max=13759, avg=856.52, stdev=1581.95 00:27:27.481 clat (usec): min=18020, max=98286, avg=54467.20, stdev=16656.12 00:27:27.481 lat (msec): min=18, max=102, avg=55.32, stdev=16.89 00:27:27.481 clat percentiles (usec): 00:27:27.481 | 1.00th=[33817], 5.00th=[35390], 10.00th=[36439], 20.00th=[36963], 00:27:27.481 | 30.00th=[38011], 40.00th=[50070], 50.00th=[52691], 60.00th=[55313], 00:27:27.481 | 70.00th=[57934], 80.00th=[73925], 90.00th=[78119], 95.00th=[84411], 00:27:27.481 | 99.00th=[92799], 99.50th=[94897], 99.90th=[96994], 99.95th=[96994], 00:27:27.481 | 99.99th=[98042] 00:27:27.481 bw ( KiB/s): min=194048, max=441344, per=8.64%, avg=295526.40, stdev=89461.48, samples=20 00:27:27.481 iops : min= 758, max= 1724, avg=1154.40, stdev=349.46, samples=20 00:27:27.481 lat (msec) : 20=0.03%, 50=40.17%, 100=59.80% 00:27:27.481 cpu : usr=2.73%, sys=4.57%, ctx=2906, majf=0, minf=1 00:27:27.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:27:27.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.481 issued rwts: total=0,11607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.481 00:27:27.481 Run status group 0 (all jobs): 00:27:27.481 WRITE: bw=3339MiB/s (3501MB/s), 189MiB/s-642MiB/s (198MB/s-673MB/s), io=32.9GiB (35.3GB), run=10026-10076msec 00:27:27.481 00:27:27.481 Disk stats (read/write): 00:27:27.481 nvme0n1: ios=49/14884, merge=0/0, ticks=13/1212072, in_queue=1212085, util=96.62% 00:27:27.481 nvme10n1: ios=0/24117, merge=0/0, ticks=0/1213545, in_queue=1213545, util=96.76% 00:27:27.481 nvme1n1: ios=0/29542, merge=0/0, ticks=0/1215738, in_queue=1215738, util=97.12% 00:27:27.481 nvme2n1: ios=0/15826, merge=0/0, ticks=0/1213746, in_queue=1213746, util=97.30% 00:27:27.481 nvme3n1: ios=0/15374, merge=0/0, ticks=0/1212131, in_queue=1212131, util=97.39% 00:27:27.481 nvme4n1: ios=0/15345, merge=0/0, ticks=0/1210267, in_queue=1210267, util=97.83% 00:27:27.481 nvme5n1: ios=0/22741, merge=0/0, ticks=0/1215385, in_queue=1215385, util=98.00% 00:27:27.481 nvme6n1: ios=0/15647, merge=0/0, ticks=0/1213279, in_queue=1213279, util=98.15% 00:27:27.481 nvme7n1: ios=0/50849, merge=0/0, ticks=0/1225818, in_queue=1225818, util=98.64% 00:27:27.481 nvme8n1: ios=0/37507, merge=0/0, ticks=0/1216808, in_queue=1216808, util=98.88% 00:27:27.481 nvme9n1: ios=0/22751, merge=0/0, ticks=0/1215422, in_queue=1215422, util=99.05% 00:27:27.481 20:48:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:27.481 20:48:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:27.481 20:48:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:27.481 20:48:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:27.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:27.740 
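[editor's note] The teardown beginning here runs the same three steps for each of the 11 subsystems; condensed from the multiconnection.sh@37-40 trace lines that follow (rpc_cmd is the harness wrapper around scripts/rpc.py):

  # Per-subsystem teardown, condensed from the trace: disconnect the
  # initiator, wait until the serial disappears from lsblk, then
  # delete the subsystem on the target side via RPC.
  for i in $(seq 1 $NVMF_SUBSYS); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      waitforserial_disconnect "SPDK${i}"
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done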
20:48:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:27.740 20:48:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:27.740 20:48:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:27.740 20:48:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:27:27.740 20:48:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:27.740 20:48:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:27:27.741 20:48:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:27.741 20:48:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:27.741 20:48:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.741 20:48:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:27.741 20:48:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.741 20:48:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:27.741 20:48:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:28.678 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:28.678 20:48:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:28.678 20:48:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:28.678 20:48:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:28.678 20:48:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:27:28.678 20:48:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:28.678 20:48:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:27:28.678 20:48:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:28.678 20:48:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:28.678 20:48:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.678 20:48:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:28.678 20:48:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.678 20:48:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.678 20:48:17 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:29.616 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 
controller(s) 00:27:29.616 20:48:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:29.616 20:48:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:29.616 20:48:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:29.616 20:48:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:27:29.616 20:48:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:27:29.616 20:48:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:29.616 20:48:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:29.616 20:48:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:29.616 20:48:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.616 20:48:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:29.616 20:48:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.616 20:48:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.616 20:48:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:30.552 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:30.552 20:48:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:30.552 20:48:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:30.552 20:48:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:30.552 20:48:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:27:30.552 20:48:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:30.552 20:48:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:27:30.811 20:48:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:30.811 20:48:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:30.811 20:48:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.811 20:48:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.811 20:48:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.811 20:48:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.811 20:48:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:31.748 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:31.749 20:48:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:31.749 20:48:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:31.749 20:48:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:31.749 20:48:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:27:31.749 20:48:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:31.749 20:48:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:27:31.749 20:48:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:31.749 20:48:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:31.749 20:48:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.749 20:48:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:31.749 20:48:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.749 20:48:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:31.749 20:48:20 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:32.687 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:32.687 20:48:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:32.687 20:48:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:32.687 20:48:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:32.687 20:48:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:27:32.687 20:48:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:32.687 20:48:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:27:32.687 20:48:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:32.687 20:48:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:32.687 20:48:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.687 20:48:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:32.687 20:48:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.687 20:48:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:32.687 20:48:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode7 00:27:33.624 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:33.624 20:48:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:33.624 20:48:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:33.624 20:48:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:33.624 20:48:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:27:33.624 20:48:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:33.624 20:48:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:27:33.624 20:48:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:33.624 20:48:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:33.624 20:48:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.624 20:48:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:33.624 20:48:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.624 20:48:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:33.624 20:48:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:34.562 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:34.562 20:48:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:34.562 20:48:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:34.562 20:48:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:34.562 20:48:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:27:34.562 20:48:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:27:34.562 20:48:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:34.562 20:48:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:34.562 20:48:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:34.562 20:48:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.562 20:48:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:34.562 20:48:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.562 20:48:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:34.562 20:48:23 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:35.498 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:35.498 20:48:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:35.498 20:48:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:35.498 20:48:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:35.498 20:48:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:27:35.757 20:48:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:35.757 20:48:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:27:35.757 20:48:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:35.757 20:48:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:35.757 20:48:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.757 20:48:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:35.757 20:48:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.757 20:48:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:35.757 20:48:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:36.693 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:36.693 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:36.693 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:36.693 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:36.693 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:27:36.693 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:36.693 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:27:36.693 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:36.693 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:36.693 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.693 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:36.693 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.693 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.693 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:37.631 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:37.631 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:37.631 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:37.631 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:37.631 20:48:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:37.631 rmmod nvme_rdma 00:27:37.631 rmmod nvme_fabrics 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1206624 ']' 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1206624 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 1206624 ']' 00:27:37.631 
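For reference, the per-subsystem teardown traced above (multiconnection.sh lines 37-40 plus the waitforserial_disconnect helper) reduces to the sketch below; rpc.py stands in for the rpc_cmd wrapper, and the polling interval is an assumption, not taken from the trace:

    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"            # drop the initiator-side controller
        # waitforserial_disconnect: poll until no block device reports serial SPDK$i
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            sleep 1                                                   # interval assumed
        done
        rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"  # remove the target-side subsystem
    done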
20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 1206624 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1206624 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1206624' 00:27:37.631 killing process with pid 1206624 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 1206624 00:27:37.631 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 1206624 00:27:38.226 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:38.226 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:38.226 00:27:38.226 real 1m16.131s 00:27:38.226 user 4m53.936s 00:27:38.226 sys 0m19.847s 00:27:38.226 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:38.226 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:38.226 ************************************ 00:27:38.226 END TEST nvmf_multiconnection 00:27:38.226 ************************************ 00:27:38.226 20:48:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:27:38.226 20:48:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:38.226 20:48:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:38.226 20:48:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:38.226 ************************************ 00:27:38.226 START TEST nvmf_initiator_timeout 00:27:38.226 ************************************ 00:27:38.226 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:27:38.486 * Looking for test storage... 
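The killprocess helper traced here for pid 1206624 is essentially the following; this is reconstructed from the xtrace output, not copied from autotest_common.sh:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                        # bail out if the process is already gone
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK app, as seen above
        # sudo-wrapped processes get extra handling in the real helper; omitted in this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap it and propagate the exit status
    }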
00:27:38.486 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:27:38.486 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.487 20:48:26 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.487 20:48:26 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:27:46.607 20:48:34 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:46.607 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
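The discovery pass above matches PCI functions against known Intel (e810/x722) and Mellanox device IDs; 0x15b3/0x1015 is the pair reported for both ports on this node. A hedged sysfs-based equivalent of the match that prints the "Found 0000:d9:00.0 (0x15b3 - 0x1015)" lines (the script itself walks a prebuilt pci_bus_cache map rather than scanning sysfs directly):

    mellanox=0x15b3
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
        if [[ $vendor == "$mellanox" && $device == 0x1015 ]]; then
            echo "Found $(basename "$dev") ($vendor - $device)"
        fi
    done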
00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:46.607 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:46.607 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:46.607 Found net devices under 0000:d9:00.1: mlx_0_1 
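Each matched PCI function is then tied to its kernel interface through the device's sysfs net/ directory, exactly as the "Found net devices under ..." lines show:

    pci=0000:d9:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # globs to .../net/mlx_0_0 here
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"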
00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # rdma_device_init 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # uname 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:46.607 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:46.608 20:48:34 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:46.608 
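rdma_device_init above brings in the IB/RDMA kernel stack one module at a time; the traced modprobe order is:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe $mod        # each module must load for NVMe/RDMA to work end to end
    done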
20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:46.608 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:46.608 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:46.608 altname enp217s0f0np0 00:27:46.608 altname ens818f0np0 00:27:46.608 inet 192.168.100.8/24 scope global mlx_0_0 00:27:46.608 valid_lft forever preferred_lft forever 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:46.608 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:27:46.608 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:46.608 altname enp217s0f1np1 00:27:46.608 altname ens818f1np1 00:27:46.608 inet 192.168.100.9/24 scope global mlx_0_1 00:27:46.608 valid_lft forever preferred_lft forever 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 
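get_ip_address, run once per RDMA interface above, is the three-stage pipeline visible in the trace:

    get_ip_address() {
        local interface=$1
        # column 4 of `ip -o -4 addr show` is ADDR/PREFIX; cut drops the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8 in this run
    get_ip_address mlx_0_1   # 192.168.100.9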
00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:46.608 192.168.100.9' 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:46.608 192.168.100.9' 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # head -n 1 00:27:46.608 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:46.867 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:46.868 192.168.100.9' 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # tail -n +2 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # head -n 1 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1222713 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1222713 00:27:46.868 20:48:35 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 1222713 ']' 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:46.868 20:48:35 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:46.868 [2024-07-26 20:48:35.231343] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:27:46.868 [2024-07-26 20:48:35.231397] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.868 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.868 [2024-07-26 20:48:35.320708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:46.868 [2024-07-26 20:48:35.361935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.868 [2024-07-26 20:48:35.361973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.868 [2024-07-26 20:48:35.361983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.868 [2024-07-26 20:48:35.361992] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.868 [2024-07-26 20:48:35.361999] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
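nvmfappstart launches the target and waitforlisten blocks until its RPC socket answers; a sketch of that flow under stated assumptions (the rpc_get_methods probe is an illustration of the idea, not necessarily the real helper's exact check):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, all tracepoint groups, cores 0-3
    nvmfpid=$!
    # poll until the app answers on /var/tmp/spdk.sock before issuing real RPCs
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done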
00:27:46.868 [2024-07-26 20:48:35.362046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.868 [2024-07-26 20:48:35.362064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:46.868 [2024-07-26 20:48:35.362275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:46.868 [2024-07-26 20:48:35.362276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.804 Malloc0 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.804 Delay0 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.804 [2024-07-26 20:48:36.154933] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1684c40/0x16a4a40) succeed. 00:27:47.804 [2024-07-26 20:48:36.164581] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1686280/0x16e60d0) succeed. 
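The storage side configured above: a 64 MiB, 512-byte-block malloc bdev wrapped by a delay bdev whose average and p99 read/write latencies all start at 30 microseconds, plus the RDMA transport. As RPCs:

    rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB backing bdev, 512 B blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    #   -r/-t: average/p99 read latency, -w/-n: average/p99 write latency (microseconds)
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192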
00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.804 [2024-07-26 20:48:36.308474] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.804 20:48:36 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:27:49.180 20:48:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:49.180 20:48:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:27:49.180 20:48:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:49.180 20:48:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:49.180 20:48:37 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:51.089 20:48:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:51.089 20:48:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:51.089 20:48:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:51.089 20:48:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:51.089 20:48:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:51.089 20:48:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:51.089 20:48:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1223431 00:27:51.089 20:48:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:51.089 20:48:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:51.089 [global] 00:27:51.089 thread=1 00:27:51.089 invalidate=1 00:27:51.089 rw=write 00:27:51.089 time_based=1 00:27:51.089 runtime=60 00:27:51.089 ioengine=libaio 00:27:51.089 direct=1 00:27:51.089 bs=4096 00:27:51.089 iodepth=1 00:27:51.089 norandommap=0 00:27:51.089 numjobs=1 00:27:51.089 00:27:51.089 verify_dump=1 00:27:51.089 verify_backlog=512 00:27:51.089 verify_state_save=0 00:27:51.089 do_verify=1 00:27:51.089 verify=crc32c-intel 00:27:51.089 [job0] 00:27:51.089 filename=/dev/nvme0n1 00:27:51.089 Could not set queue depth (nvme0n1) 00:27:51.347 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:51.347 fio-3.35 00:27:51.347 Starting 1 thread 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:53.882 true 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:53.882 true 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:53.882 true 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:53.882 true 00:27:53.882 20:48:42 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.882 20:48:42 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.169 true 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.169 true 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.169 true 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.169 true 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:57.169 20:48:45 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1223431 00:28:53.448 00:28:53.448 job0: (groupid=0, jobs=1): err= 0: pid=1223570: Fri Jul 26 20:49:39 2024 00:28:53.448 read: IOPS=1328, BW=5315KiB/s (5443kB/s)(311MiB/60000msec) 00:28:53.448 slat (usec): min=2, max=11872, avg= 8.37, stdev=59.27 00:28:53.448 clat (usec): min=54, max=42273k, avg=631.31, stdev=149713.42 00:28:53.448 lat (usec): min=83, max=42273k, avg=639.68, stdev=149713.44 00:28:53.448 clat percentiles (usec): 00:28:53.448 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 96], 00:28:53.448 | 30.00th=[ 98], 40.00th=[ 99], 50.00th=[ 101], 60.00th=[ 103], 00:28:53.448 | 70.00th=[ 104], 80.00th=[ 106], 90.00th=[ 110], 95.00th=[ 113], 00:28:53.448 | 99.00th=[ 119], 99.50th=[ 121], 99.90th=[ 128], 99.95th=[ 143], 00:28:53.448 | 99.99th=[ 265] 00:28:53.448 write: IOPS=1331, BW=5325KiB/s (5453kB/s)(312MiB/60000msec); 0 zone resets 00:28:53.448 slat (usec): 
min=3, max=280, avg= 9.97, stdev= 3.18 00:28:53.448 clat (usec): min=3, max=1380, avg=98.75, stdev= 8.87 00:28:53.448 lat (usec): min=81, max=1395, avg=108.72, stdev= 9.94 00:28:53.448 clat percentiles (usec): 00:28:53.448 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 93], 00:28:53.448 | 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 99], 60.00th=[ 100], 00:28:53.448 | 70.00th=[ 102], 80.00th=[ 104], 90.00th=[ 108], 95.00th=[ 111], 00:28:53.448 | 99.00th=[ 116], 99.50th=[ 119], 99.90th=[ 127], 99.95th=[ 137], 00:28:53.448 | 99.99th=[ 285] 00:28:53.448 bw ( KiB/s): min= 2768, max=20352, per=100.00%, avg=17320.00, stdev=3373.80, samples=36 00:28:53.448 iops : min= 692, max= 5088, avg=4330.00, stdev=843.45, samples=36 00:28:53.448 lat (usec) : 4=0.01%, 100=52.11%, 250=47.88%, 500=0.01%, 750=0.01% 00:28:53.448 lat (msec) : 2=0.01%, >=2000=0.01% 00:28:53.448 cpu : usr=1.56%, sys=3.16%, ctx=159606, majf=0, minf=108 00:28:53.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:53.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.448 issued rwts: total=79725,79872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:53.448 00:28:53.448 Run status group 0 (all jobs): 00:28:53.448 READ: bw=5315KiB/s (5443kB/s), 5315KiB/s-5315KiB/s (5443kB/s-5443kB/s), io=311MiB (327MB), run=60000-60000msec 00:28:53.448 WRITE: bw=5325KiB/s (5453kB/s), 5325KiB/s-5325KiB/s (5453kB/s-5453kB/s), io=312MiB (327MB), run=60000-60000msec 00:28:53.448 00:28:53.448 Disk stats (read/write): 00:28:53.448 nvme0n1: ios=79658/79378, merge=0/0, ticks=7633/7416, in_queue=15049, util=99.58% 00:28:53.448 20:49:39 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:53.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:53.448 nvmf hotplug test: fio successful as expected 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:53.448 20:49:40 
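The initiator timeout exercise traced above stretches the Delay0 bdev latencies to roughly 31 s, long enough to trip the initiator's I/O timeout while fio writes with crc32c verification, then drops them back to 30 us so the stalled backlog drains before the run is scored. A minimal sketch of that toggle against a live target, assuming the default /var/tmp/spdk.sock RPC socket (delay-bdev latencies are given in microseconds):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Stall I/O well past the initiator timeout (31,000,000 us = 31 s)
    $rpc bdev_delay_update_latency Delay0 avg_read 31000000
    $rpc bdev_delay_update_latency Delay0 avg_write 31000000
    $rpc bdev_delay_update_latency Delay0 p99_read 31000000
    $rpc bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    # Restore near-zero latency so the queued I/O can complete
    for lat in avg_read avg_write p99_read p99_write; do
        $rpc bdev_delay_update_latency Delay0 "$lat" 30
    done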
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:53.448 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:53.449 rmmod nvme_rdma 00:28:53.449 rmmod nvme_fabrics 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1222713 ']' 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1222713 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 1222713 ']' 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 1222713 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1222713 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1222713' 00:28:53.449 killing process with pid 1222713 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 1222713 00:28:53.449 20:49:40 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 
-- # wait 1222713 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:53.449 00:28:53.449 real 1m14.499s 00:28:53.449 user 4m34.513s 00:28:53.449 sys 0m8.939s 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:53.449 ************************************ 00:28:53.449 END TEST nvmf_initiator_timeout 00:28:53.449 ************************************ 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' rdma = tcp ']' 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # [[ rdma == \r\d\m\a ]] 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@61 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:53.449 ************************************ 00:28:53.449 START TEST nvmf_srq_overwhelm 00:28:53.449 ************************************ 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:28:53.449 * Looking for test storage... 
00:28:53.449 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:28:53.449 20:49:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:01.576 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.576 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # 
x722=() 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:01.577 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:01.577 20:49:49 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:01.577 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:01.577 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:01.577 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:01.577 
20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:01.577 
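The allocate_nic_ips pass that follows resolves each RDMA-capable netdev to its IPv4 address with a small ip/awk/cut pipeline, as traced at nvmf/common.sh@112-113. The same lookup as a standalone helper (the interface name is just the one present on this rig):

    get_ip_address() {
        local interface=$1
        # "inet 192.168.100.8/24 ..." -> keep field 4, strip the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 here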
20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:01.577 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:29:01.578 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:01.578 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:01.578 altname enp217s0f0np0 00:29:01.578 altname ens818f0np0 00:29:01.578 inet 192.168.100.8/24 scope global mlx_0_0 00:29:01.578 valid_lft forever preferred_lft forever 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:29:01.578 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:01.578 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:01.578 altname enp217s0f1np1 00:29:01.578 altname ens818f1np1 00:29:01.578 inet 192.168.100.9/24 scope global mlx_0_1 00:29:01.578 valid_lft forever preferred_lft forever 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso 
']' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address 
mlx_0_1 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:01.578 192.168.100.9' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:01.578 192.168.100.9' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:01.578 192.168.100.9' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=1237644 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 1237644 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # '[' -z 1237644 ']' 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:01.578 20:49:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:01.578 [2024-07-26 20:49:49.702657] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:29:01.578 [2024-07-26 20:49:49.702708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.578 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.578 [2024-07-26 20:49:49.787716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:01.578 [2024-07-26 20:49:49.828322] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.578 [2024-07-26 20:49:49.828364] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.578 [2024-07-26 20:49:49.828374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.578 [2024-07-26 20:49:49.828382] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.578 [2024-07-26 20:49:49.828389] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:01.578 [2024-07-26 20:49:49.828437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.578 [2024-07-26 20:49:49.828531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:01.578 [2024-07-26 20:49:49.828618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:01.578 [2024-07-26 20:49:49.828620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # return 0 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:02.147 [2024-07-26 20:49:50.601308] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb8bea0/0xb90390) succeed. 00:29:02.147 [2024-07-26 20:49:50.610450] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb8d4e0/0xbd1a20) succeed. 
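Target bring-up in srq_overwhelm deliberately caps the shared receive queue so the deep-queue fio run at the end of the test can overrun it. A sketch of the traced transport RPC against a running nvmf_tgt (the option reading, -u as I/O unit size and -s as maximum SRQ depth, is inferred from rpc.py's usual mnemonics rather than stated in this log):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # 1024 shared buffers, 8192-byte I/O units, SRQ depth capped at 1024
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024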
00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:02.147 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.148 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:02.148 Malloc0 00:29:02.148 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.148 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:29:02.148 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.148 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:02.406 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.406 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:29:02.406 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.406 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:02.406 [2024-07-26 20:49:50.708647] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:02.406 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.406 20:49:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- 
# lsblk -l -o NAME 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:03.342 Malloc1 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.342 20:49:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.279 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:04.279 Malloc2 00:29:04.280 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.280 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:29:04.280 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.280 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:04.280 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.280 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:29:04.280 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.280 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:04.280 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.280 20:49:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:29:05.653 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:29:05.653 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:29:05.653 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:29:05.653 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:29:05.653 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:29:05.653 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:29:05.653 20:49:53 
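waitforblk, traced at common/autotest_common.sh@1235-1246 around every connect, simply polls lsblk until the expected namespace appears. A sketch of that polling loop under the same reading (the retry bound is an assumption; the helper's exact limit is not visible in this excerpt):

    waitforblk() {
        local i=0 name=$1
        # Poll lsblk until the block device shows up, as the traced helper does
        while ! lsblk -l -o NAME | grep -q -w "$name"; do
            sleep 1
            (( ++i > 15 )) && return 1   # assumed retry bound
        done
        return 0
    }
    waitforblk nvme1n1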
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:29:05.653 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:05.653 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:05.654 Malloc3 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.654 20:49:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:29:06.589 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:29:06.589 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:29:06.589 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:29:06.589 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:29:06.589 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:29:06.589 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:29:06.589 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:29:06.590 
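Each pass of the seq 0 5 loop follows one recipe: create a subsystem, back it with a 64 MB malloc bdev, expose it on the RDMA listener, and connect from the initiator. One iteration written out as a hedged sketch (i=4 is the pass that follows; the host NQN/ID are the values already shown in the trace):

    i=4
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    # -i 15 matches the connect string the harness selected for these ConnectX NICs
    nvme connect -i 15 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e \
        -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420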
20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:06.590 Malloc4 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.590 20:49:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.526 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:07.526 Malloc5 00:29:07.527 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.527 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:29:07.527 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.527 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:07.527 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.527 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:29:07.527 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.527 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:07.527 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.527 20:49:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:29:08.462 20:49:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:29:08.462 20:49:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:29:08.462 20:49:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:29:08.462 20:49:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:29:08.462 20:49:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:29:08.462 20:49:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:29:08.462 20:49:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:29:08.462 20:49:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:29:08.462 
[global] 00:29:08.462 thread=1 00:29:08.462 invalidate=1 00:29:08.462 rw=read 00:29:08.462 time_based=1 00:29:08.462 runtime=10 00:29:08.462 ioengine=libaio 00:29:08.462 direct=1 00:29:08.462 bs=1048576 00:29:08.462 iodepth=128 00:29:08.462 norandommap=1 00:29:08.462 numjobs=13 00:29:08.462 00:29:08.462 [job0] 00:29:08.462 filename=/dev/nvme0n1 00:29:08.462 [job1] 00:29:08.462 filename=/dev/nvme1n1 00:29:08.720 [job2] 00:29:08.720 filename=/dev/nvme2n1 00:29:08.720 [job3] 00:29:08.720 filename=/dev/nvme3n1 00:29:08.720 [job4] 00:29:08.720 filename=/dev/nvme4n1 00:29:08.720 [job5] 00:29:08.720 filename=/dev/nvme5n1 00:29:08.720 Could not set queue depth (nvme0n1) 00:29:08.720 Could not set queue depth (nvme1n1) 00:29:08.720 Could not set queue depth (nvme2n1) 00:29:08.720 Could not set queue depth (nvme3n1) 00:29:08.720 Could not set queue depth (nvme4n1) 00:29:08.720 Could not set queue depth (nvme5n1) 00:29:08.979 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:08.979 ... 00:29:08.979 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:08.979 ... 00:29:08.979 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:08.979 ... 00:29:08.979 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:08.979 ... 00:29:08.979 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:08.979 ... 00:29:08.979 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:08.979 ... 
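The [global]/[jobN] listing above is the job file that fio-wrapper assembles from its arguments; matching it against the command line in the log, -i 1048576 becomes bs, -d 128 becomes iodepth, -t read becomes rw, -r 10 becomes runtime, and -n 13 becomes numjobs, so six job sections at numjobs=13 yield the 78 threads fio reports below. A sketch of reproducing the run without the wrapper, assuming fio with libaio support is installed and the six namespaces are still connected (job-file contents copied from the log):

cat > /tmp/srq_overwhelm.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=1048576
iodepth=128
norandommap=1
numjobs=13

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme1n1
[job2]
filename=/dev/nvme2n1
[job3]
filename=/dev/nvme3n1
[job4]
filename=/dev/nvme4n1
[job5]
filename=/dev/nvme5n1
EOF
fio /tmp/srq_overwhelm.fio

(The "Could not set queue depth" lines above are fio warnings, not errors; the log shows all six jobs starting regardless.)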
00:29:08.979 fio-3.35 00:29:08.979 Starting 78 threads 00:29:21.222 00:29:21.222 job0: (groupid=0, jobs=1): err= 0: pid=1239242: Fri Jul 26 20:50:08 2024 00:29:21.222 read: IOPS=21, BW=21.3MiB/s (22.4MB/s)(221MiB/10360msec) 00:29:21.222 slat (usec): min=46, max=4256.9k, avg=46711.83, stdev=337988.09 00:29:21.222 clat (msec): min=35, max=8113, avg=4848.15, stdev=3256.98 00:29:21.222 lat (msec): min=715, max=8114, avg=4894.86, stdev=3237.36 00:29:21.222 clat percentiles (msec): 00:29:21.222 | 1.00th=[ 718], 5.00th=[ 743], 10.00th=[ 760], 20.00th=[ 776], 00:29:21.222 | 30.00th=[ 835], 40.00th=[ 2400], 50.00th=[ 7416], 60.00th=[ 7483], 00:29:21.222 | 70.00th=[ 7684], 80.00th=[ 7819], 90.00th=[ 8020], 95.00th=[ 8020], 00:29:21.222 | 99.00th=[ 8087], 99.50th=[ 8087], 99.90th=[ 8087], 99.95th=[ 8087], 00:29:21.222 | 99.99th=[ 8087] 00:29:21.222 bw ( KiB/s): min= 2048, max=155648, per=0.96%, avg=38092.80, stdev=65928.43, samples=5 00:29:21.222 iops : min= 2, max= 152, avg=37.20, stdev=64.38, samples=5 00:29:21.222 lat (msec) : 50=0.45%, 750=7.69%, 1000=23.98%, 2000=0.45%, >=2000=67.42% 00:29:21.222 cpu : usr=0.01%, sys=0.75%, ctx=311, majf=0, minf=32769 00:29:21.222 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.5%, >=64=71.5% 00:29:21.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.222 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:29:21.222 issued rwts: total=221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.222 job0: (groupid=0, jobs=1): err= 0: pid=1239243: Fri Jul 26 20:50:08 2024 00:29:21.222 read: IOPS=28, BW=28.1MiB/s (29.5MB/s)(289MiB/10288msec) 00:29:21.222 slat (usec): min=106, max=2100.9k, avg=35574.45, stdev=218195.18 00:29:21.222 clat (msec): min=5, max=8851, avg=4207.97, stdev=3366.60 00:29:21.222 lat (msec): min=737, max=8854, avg=4243.55, stdev=3365.46 00:29:21.222 clat percentiles (msec): 00:29:21.222 | 1.00th=[ 735], 5.00th=[ 776], 10.00th=[ 802], 20.00th=[ 961], 00:29:21.222 | 30.00th=[ 1083], 40.00th=[ 1502], 50.00th=[ 2123], 60.00th=[ 6409], 00:29:21.222 | 70.00th=[ 7349], 80.00th=[ 8490], 90.00th=[ 8658], 95.00th=[ 8792], 00:29:21.222 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:29:21.222 | 99.99th=[ 8792] 00:29:21.222 bw ( KiB/s): min= 2048, max=116736, per=1.18%, avg=47077.00, stdev=44081.68, samples=7 00:29:21.222 iops : min= 2, max= 114, avg=45.86, stdev=42.90, samples=7 00:29:21.222 lat (msec) : 10=0.35%, 750=3.11%, 1000=21.11%, 2000=21.80%, >=2000=53.63% 00:29:21.222 cpu : usr=0.03%, sys=1.12%, ctx=571, majf=0, minf=32769 00:29:21.222 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.5%, 32=11.1%, >=64=78.2% 00:29:21.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.222 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:29:21.222 issued rwts: total=289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.222 job0: (groupid=0, jobs=1): err= 0: pid=1239244: Fri Jul 26 20:50:08 2024 00:29:21.222 read: IOPS=25, BW=25.8MiB/s (27.1MB/s)(267MiB/10341msec) 00:29:21.222 slat (usec): min=45, max=2161.1k, avg=38560.10, stdev=245483.07 00:29:21.222 clat (msec): min=43, max=8984, avg=4586.47, stdev=3275.01 00:29:21.222 lat (msec): min=925, max=8987, avg=4625.03, stdev=3272.16 00:29:21.222 clat percentiles (msec): 00:29:21.222 | 1.00th=[ 919], 5.00th=[ 961], 10.00th=[ 1003], 20.00th=[ 
1183], 00:29:21.222 | 30.00th=[ 1787], 40.00th=[ 1989], 50.00th=[ 2089], 60.00th=[ 7282], 00:29:21.222 | 70.00th=[ 7282], 80.00th=[ 8557], 90.00th=[ 8792], 95.00th=[ 8926], 00:29:21.222 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:29:21.222 | 99.99th=[ 8926] 00:29:21.222 bw ( KiB/s): min= 4096, max=94208, per=1.19%, avg=47445.33, stdev=38279.89, samples=6 00:29:21.222 iops : min= 4, max= 92, avg=46.33, stdev=37.38, samples=6 00:29:21.222 lat (msec) : 50=0.37%, 1000=9.36%, 2000=34.83%, >=2000=55.43% 00:29:21.222 cpu : usr=0.02%, sys=1.13%, ctx=457, majf=0, minf=32769 00:29:21.222 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=6.0%, 32=12.0%, >=64=76.4% 00:29:21.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.222 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:29:21.222 issued rwts: total=267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.222 job0: (groupid=0, jobs=1): err= 0: pid=1239245: Fri Jul 26 20:50:08 2024 00:29:21.222 read: IOPS=3, BW=3646KiB/s (3734kB/s)(37.0MiB/10391msec) 00:29:21.222 slat (usec): min=1164, max=2142.3k, avg=279872.80, stdev=676894.19 00:29:21.222 clat (msec): min=35, max=10389, avg=6139.56, stdev=3819.97 00:29:21.222 lat (msec): min=1897, max=10390, avg=6419.43, stdev=3738.81 00:29:21.222 clat percentiles (msec): 00:29:21.222 | 1.00th=[ 36], 5.00th=[ 1905], 10.00th=[ 1972], 20.00th=[ 2072], 00:29:21.222 | 30.00th=[ 2123], 40.00th=[ 2165], 50.00th=[ 6409], 60.00th=[ 8557], 00:29:21.222 | 70.00th=[10134], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:29:21.222 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:29:21.222 | 99.99th=[10402] 00:29:21.222 lat (msec) : 50=2.70%, 2000=13.51%, >=2000=83.78% 00:29:21.222 cpu : usr=0.00%, sys=0.26%, ctx=110, majf=0, minf=9473 00:29:21.222 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:29:21.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.222 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:29:21.222 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.222 job0: (groupid=0, jobs=1): err= 0: pid=1239246: Fri Jul 26 20:50:08 2024 00:29:21.222 read: IOPS=59, BW=59.9MiB/s (62.9MB/s)(623MiB/10392msec) 00:29:21.222 slat (usec): min=597, max=2042.8k, avg=16670.06, stdev=98616.83 00:29:21.222 clat (usec): min=1196, max=4873.7k, avg=1983569.35, stdev=1149638.62 00:29:21.222 lat (msec): min=1102, max=4883, avg=2000.24, stdev=1148.65 00:29:21.222 clat percentiles (msec): 00:29:21.222 | 1.00th=[ 1133], 5.00th=[ 1150], 10.00th=[ 1167], 20.00th=[ 1234], 00:29:21.222 | 30.00th=[ 1284], 40.00th=[ 1368], 50.00th=[ 1418], 60.00th=[ 1653], 00:29:21.222 | 70.00th=[ 1770], 80.00th=[ 2836], 90.00th=[ 4178], 95.00th=[ 4530], 00:29:21.222 | 99.00th=[ 4799], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:29:21.222 | 99.99th=[ 4866] 00:29:21.222 bw ( KiB/s): min= 6144, max=116736, per=2.12%, avg=84480.00, stdev=29615.64, samples=12 00:29:21.222 iops : min= 6, max= 114, avg=82.50, stdev=28.92, samples=12 00:29:21.222 lat (msec) : 2=0.16%, 2000=78.97%, >=2000=20.87% 00:29:21.222 cpu : usr=0.05%, sys=1.68%, ctx=1313, majf=0, minf=32769 00:29:21.222 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=89.9% 00:29:21.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.222 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:29:21.222 issued rwts: total=623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.222 job0: (groupid=0, jobs=1): err= 0: pid=1239247: Fri Jul 26 20:50:08 2024 00:29:21.222 read: IOPS=125, BW=126MiB/s (132MB/s)(1264MiB/10063msec) 00:29:21.222 slat (usec): min=34, max=735223, avg=7915.34, stdev=28581.13 00:29:21.222 clat (msec): min=51, max=1715, avg=971.00, stdev=358.53 00:29:21.223 lat (msec): min=110, max=1718, avg=978.92, stdev=359.56 00:29:21.223 clat percentiles (msec): 00:29:21.223 | 1.00th=[ 128], 5.00th=[ 409], 10.00th=[ 600], 20.00th=[ 709], 00:29:21.223 | 30.00th=[ 818], 40.00th=[ 860], 50.00th=[ 902], 60.00th=[ 944], 00:29:21.223 | 70.00th=[ 1020], 80.00th=[ 1368], 90.00th=[ 1485], 95.00th=[ 1620], 00:29:21.223 | 99.00th=[ 1670], 99.50th=[ 1687], 99.90th=[ 1703], 99.95th=[ 1720], 00:29:21.223 | 99.99th=[ 1720] 00:29:21.223 bw ( KiB/s): min=22528, max=219136, per=3.08%, avg=122490.58, stdev=54734.96, samples=19 00:29:21.223 iops : min= 22, max= 214, avg=119.53, stdev=53.44, samples=19 00:29:21.223 lat (msec) : 100=0.08%, 250=2.37%, 500=4.83%, 750=16.06%, 1000=45.89% 00:29:21.223 lat (msec) : 2000=30.78% 00:29:21.223 cpu : usr=0.05%, sys=1.84%, ctx=1474, majf=0, minf=32769 00:29:21.223 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:29:21.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.223 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.223 issued rwts: total=1264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.223 job0: (groupid=0, jobs=1): err= 0: pid=1239248: Fri Jul 26 20:50:08 2024 00:29:21.223 read: IOPS=85, BW=85.2MiB/s (89.4MB/s)(888MiB/10417msec) 00:29:21.223 slat (usec): min=43, max=2107.1k, avg=11722.83, stdev=103560.93 00:29:21.223 clat (usec): min=1350, max=6364.2k, avg=1439980.95, stdev=1084430.85 00:29:21.223 lat (msec): min=513, max=6386, avg=1451.70, stdev=1085.90 00:29:21.223 clat percentiles (msec): 00:29:21.223 | 1.00th=[ 518], 5.00th=[ 550], 10.00th=[ 609], 20.00th=[ 709], 00:29:21.223 | 30.00th=[ 768], 40.00th=[ 810], 50.00th=[ 844], 60.00th=[ 885], 00:29:21.223 | 70.00th=[ 944], 80.00th=[ 2836], 90.00th=[ 3205], 95.00th=[ 3507], 00:29:21.223 | 99.00th=[ 3742], 99.50th=[ 3809], 99.90th=[ 6342], 99.95th=[ 6342], 00:29:21.223 | 99.99th=[ 6342] 00:29:21.223 bw ( KiB/s): min=16384, max=210522, per=3.26%, avg=129671.50, stdev=63881.24, samples=12 00:29:21.223 iops : min= 16, max= 205, avg=126.58, stdev=62.32, samples=12 00:29:21.223 lat (msec) : 2=0.11%, 750=26.69%, 1000=43.58%, 2000=1.13%, >=2000=28.49% 00:29:21.223 cpu : usr=0.07%, sys=1.96%, ctx=801, majf=0, minf=32769 00:29:21.223 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:29:21.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.223 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.223 issued rwts: total=888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.223 job0: (groupid=0, jobs=1): err= 0: pid=1239249: Fri Jul 26 20:50:08 2024 00:29:21.223 read: IOPS=196, BW=197MiB/s (206MB/s)(1976MiB/10040msec) 00:29:21.223 slat (usec): min=44, max=104246, avg=5056.95, stdev=6741.68 00:29:21.223 clat (msec): 
min=34, max=1003, avg=612.51, stdev=181.37 00:29:21.223 lat (msec): min=39, max=1006, avg=617.57, stdev=182.44 00:29:21.223 clat percentiles (msec): 00:29:21.223 | 1.00th=[ 96], 5.00th=[ 376], 10.00th=[ 422], 20.00th=[ 485], 00:29:21.223 | 30.00th=[ 518], 40.00th=[ 527], 50.00th=[ 592], 60.00th=[ 625], 00:29:21.223 | 70.00th=[ 709], 80.00th=[ 785], 90.00th=[ 885], 95.00th=[ 911], 00:29:21.223 | 99.00th=[ 978], 99.50th=[ 986], 99.90th=[ 1003], 99.95th=[ 1003], 00:29:21.223 | 99.99th=[ 1003] 00:29:21.223 bw ( KiB/s): min=34816, max=299008, per=5.00%, avg=199078.63, stdev=65514.62, samples=19 00:29:21.223 iops : min= 34, max= 292, avg=194.37, stdev=63.93, samples=19 00:29:21.223 lat (msec) : 50=0.20%, 100=0.81%, 250=2.13%, 500=19.94%, 750=53.54% 00:29:21.223 lat (msec) : 1000=23.28%, 2000=0.10% 00:29:21.223 cpu : usr=0.13%, sys=2.52%, ctx=3989, majf=0, minf=32769 00:29:21.223 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:29:21.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.223 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.223 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.223 job0: (groupid=0, jobs=1): err= 0: pid=1239250: Fri Jul 26 20:50:08 2024 00:29:21.223 read: IOPS=154, BW=155MiB/s (162MB/s)(1555MiB/10052msec) 00:29:21.223 slat (usec): min=42, max=97936, avg=6427.59, stdev=11014.48 00:29:21.223 clat (msec): min=46, max=1471, avg=768.70, stdev=381.75 00:29:21.223 lat (msec): min=51, max=1481, avg=775.13, stdev=384.57 00:29:21.223 clat percentiles (msec): 00:29:21.223 | 1.00th=[ 99], 5.00th=[ 288], 10.00th=[ 384], 20.00th=[ 388], 00:29:21.223 | 30.00th=[ 414], 40.00th=[ 550], 50.00th=[ 651], 60.00th=[ 961], 00:29:21.223 | 70.00th=[ 1036], 80.00th=[ 1234], 90.00th=[ 1334], 95.00th=[ 1351], 00:29:21.223 | 99.00th=[ 1418], 99.50th=[ 1452], 99.90th=[ 1469], 99.95th=[ 1469], 00:29:21.223 | 99.99th=[ 1469] 00:29:21.223 bw ( KiB/s): min=32702, max=337920, per=4.08%, avg=162431.67, stdev=89923.08, samples=18 00:29:21.223 iops : min= 31, max= 330, avg=158.50, stdev=87.91, samples=18 00:29:21.223 lat (msec) : 50=0.06%, 100=0.96%, 250=3.15%, 500=32.35%, 750=17.04% 00:29:21.223 lat (msec) : 1000=14.02%, 2000=32.41% 00:29:21.223 cpu : usr=0.08%, sys=2.46%, ctx=2053, majf=0, minf=32769 00:29:21.223 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:29:21.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.223 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.223 issued rwts: total=1555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.223 job0: (groupid=0, jobs=1): err= 0: pid=1239251: Fri Jul 26 20:50:08 2024 00:29:21.223 read: IOPS=21, BW=21.6MiB/s (22.6MB/s)(225MiB/10440msec) 00:29:21.223 slat (usec): min=898, max=2139.7k, avg=46219.79, stdev=237340.80 00:29:21.223 clat (msec): min=38, max=8563, avg=3892.74, stdev=1980.46 00:29:21.223 lat (msec): min=952, max=10352, avg=3938.96, stdev=2007.58 00:29:21.223 clat percentiles (msec): 00:29:21.223 | 1.00th=[ 961], 5.00th=[ 1099], 10.00th=[ 1183], 20.00th=[ 1469], 00:29:21.223 | 30.00th=[ 1905], 40.00th=[ 4463], 50.00th=[ 4463], 60.00th=[ 4530], 00:29:21.223 | 70.00th=[ 5000], 80.00th=[ 5269], 90.00th=[ 6208], 95.00th=[ 6409], 00:29:21.223 | 99.00th=[ 8423], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 
8557], 00:29:21.223 | 99.99th=[ 8557] 00:29:21.223 bw ( KiB/s): min= 2048, max=79872, per=0.83%, avg=33109.33, stdev=29349.11, samples=6 00:29:21.223 iops : min= 2, max= 78, avg=32.33, stdev=28.66, samples=6 00:29:21.223 lat (msec) : 50=0.44%, 1000=0.89%, 2000=30.67%, >=2000=68.00% 00:29:21.223 cpu : usr=0.02%, sys=1.03%, ctx=592, majf=0, minf=32769 00:29:21.223 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.1%, 32=14.2%, >=64=72.0% 00:29:21.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.223 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:29:21.223 issued rwts: total=225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.223 job0: (groupid=0, jobs=1): err= 0: pid=1239252: Fri Jul 26 20:50:08 2024 00:29:21.223 read: IOPS=218, BW=219MiB/s (230MB/s)(2194MiB/10023msec) 00:29:21.223 slat (usec): min=63, max=102388, avg=4554.56, stdev=7067.78 00:29:21.223 clat (msec): min=17, max=996, avg=551.21, stdev=156.61 00:29:21.223 lat (msec): min=25, max=1011, avg=555.76, stdev=157.51 00:29:21.223 clat percentiles (msec): 00:29:21.223 | 1.00th=[ 104], 5.00th=[ 401], 10.00th=[ 414], 20.00th=[ 430], 00:29:21.223 | 30.00th=[ 464], 40.00th=[ 477], 50.00th=[ 502], 60.00th=[ 592], 00:29:21.223 | 70.00th=[ 634], 80.00th=[ 659], 90.00th=[ 743], 95.00th=[ 852], 00:29:21.223 | 99.00th=[ 978], 99.50th=[ 978], 99.90th=[ 995], 99.95th=[ 995], 00:29:21.223 | 99.99th=[ 995] 00:29:21.223 bw ( KiB/s): min=45056, max=313344, per=5.55%, avg=221184.00, stdev=69981.90, samples=18 00:29:21.223 iops : min= 44, max= 306, avg=216.00, stdev=68.34, samples=18 00:29:21.223 lat (msec) : 20=0.05%, 50=0.46%, 100=0.46%, 250=1.60%, 500=46.35% 00:29:21.223 lat (msec) : 750=41.61%, 1000=9.48% 00:29:21.223 cpu : usr=0.13%, sys=2.48%, ctx=3909, majf=0, minf=32769 00:29:21.223 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:29:21.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.223 issued rwts: total=2194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.223 job0: (groupid=0, jobs=1): err= 0: pid=1239253: Fri Jul 26 20:50:08 2024 00:29:21.223 read: IOPS=30, BW=30.1MiB/s (31.6MB/s)(310MiB/10295msec) 00:29:21.223 slat (usec): min=50, max=2156.1k, avg=33149.98, stdev=209919.46 00:29:21.223 clat (msec): min=16, max=7482, avg=3512.79, stdev=2654.27 00:29:21.223 lat (msec): min=880, max=7485, avg=3545.94, stdev=2650.64 00:29:21.223 clat percentiles (msec): 00:29:21.223 | 1.00th=[ 877], 5.00th=[ 885], 10.00th=[ 885], 20.00th=[ 894], 00:29:21.223 | 30.00th=[ 953], 40.00th=[ 1754], 50.00th=[ 2366], 60.00th=[ 4144], 00:29:21.223 | 70.00th=[ 5738], 80.00th=[ 7080], 90.00th=[ 7282], 95.00th=[ 7349], 00:29:21.223 | 99.00th=[ 7483], 99.50th=[ 7483], 99.90th=[ 7483], 99.95th=[ 7483], 00:29:21.223 | 99.99th=[ 7483] 00:29:21.223 bw ( KiB/s): min= 2048, max=124928, per=1.34%, avg=53248.00, stdev=48088.04, samples=7 00:29:21.223 iops : min= 2, max= 122, avg=52.00, stdev=46.96, samples=7 00:29:21.223 lat (msec) : 20=0.32%, 1000=37.42%, 2000=8.71%, >=2000=53.55% 00:29:21.223 cpu : usr=0.02%, sys=1.15%, ctx=336, majf=0, minf=32769 00:29:21.223 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.3%, >=64=79.7% 00:29:21.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.223 
complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:29:21.223 issued rwts: total=310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.224 job0: (groupid=0, jobs=1): err= 0: pid=1239254: Fri Jul 26 20:50:08 2024 00:29:21.224 read: IOPS=3, BW=3353KiB/s (3433kB/s)(34.0MiB/10384msec) 00:29:21.224 slat (usec): min=1146, max=4199.8k, avg=304930.18, stdev=888064.50 00:29:21.224 clat (msec): min=16, max=10370, avg=7678.02, stdev=2498.61 00:29:21.224 lat (msec): min=4216, max=10383, avg=7982.95, stdev=2142.49 00:29:21.224 clat percentiles (msec): 00:29:21.224 | 1.00th=[ 17], 5.00th=[ 4212], 10.00th=[ 6208], 20.00th=[ 6208], 00:29:21.224 | 30.00th=[ 6275], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 8557], 00:29:21.224 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:29:21.224 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:29:21.224 | 99.99th=[10402] 00:29:21.224 lat (msec) : 20=2.94%, >=2000=97.06% 00:29:21.224 cpu : usr=0.01%, sys=0.22%, ctx=89, majf=0, minf=8705 00:29:21.224 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:29:21.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.224 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:29:21.224 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.224 job1: (groupid=0, jobs=1): err= 0: pid=1239255: Fri Jul 26 20:50:08 2024 00:29:21.224 read: IOPS=4, BW=4496KiB/s (4603kB/s)(46.0MiB/10478msec) 00:29:21.224 slat (usec): min=919, max=2127.5k, avg=226831.27, stdev=635873.97 00:29:21.224 clat (msec): min=42, max=10475, avg=8982.35, stdev=2691.47 00:29:21.224 lat (msec): min=2139, max=10477, avg=9209.18, stdev=2337.77 00:29:21.224 clat percentiles (msec): 00:29:21.224 | 1.00th=[ 43], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6477], 00:29:21.224 | 30.00th=[10268], 40.00th=[10402], 50.00th=[10402], 60.00th=[10402], 00:29:21.224 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10537], 95.00th=[10537], 00:29:21.224 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:29:21.224 | 99.99th=[10537] 00:29:21.224 lat (msec) : 50=2.17%, >=2000=97.83% 00:29:21.224 cpu : usr=0.00%, sys=0.48%, ctx=104, majf=0, minf=11777 00:29:21.224 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:29:21.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.224 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:29:21.224 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.224 job1: (groupid=0, jobs=1): err= 0: pid=1239256: Fri Jul 26 20:50:08 2024 00:29:21.224 read: IOPS=58, BW=58.3MiB/s (61.2MB/s)(605MiB/10373msec) 00:29:21.224 slat (usec): min=46, max=2106.1k, avg=17045.06, stdev=126033.89 00:29:21.224 clat (msec): min=55, max=4913, avg=1484.44, stdev=1203.34 00:29:21.224 lat (msec): min=635, max=4959, avg=1501.48, stdev=1215.42 00:29:21.224 clat percentiles (msec): 00:29:21.224 | 1.00th=[ 634], 5.00th=[ 634], 10.00th=[ 651], 20.00th=[ 701], 00:29:21.224 | 30.00th=[ 751], 40.00th=[ 768], 50.00th=[ 776], 60.00th=[ 852], 00:29:21.224 | 70.00th=[ 1167], 80.00th=[ 2903], 90.00th=[ 3272], 95.00th=[ 4279], 00:29:21.224 | 99.00th=[ 4799], 99.50th=[ 4866], 99.90th=[ 4933], 99.95th=[ 4933], 
00:29:21.224 | 99.99th=[ 4933] 00:29:21.224 bw ( KiB/s): min= 4096, max=208896, per=3.07%, avg=122112.00, stdev=77800.42, samples=8 00:29:21.224 iops : min= 4, max= 204, avg=119.25, stdev=75.98, samples=8 00:29:21.224 lat (msec) : 100=0.17%, 750=29.59%, 1000=35.04%, 2000=7.93%, >=2000=27.27% 00:29:21.224 cpu : usr=0.06%, sys=1.60%, ctx=801, majf=0, minf=32769 00:29:21.224 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:29:21.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.224 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:29:21.224 issued rwts: total=605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.224 job1: (groupid=0, jobs=1): err= 0: pid=1239257: Fri Jul 26 20:50:08 2024 00:29:21.224 read: IOPS=129, BW=129MiB/s (136MB/s)(1295MiB/10015msec) 00:29:21.224 slat (usec): min=42, max=2078.9k, avg=7715.75, stdev=74260.16 00:29:21.224 clat (msec): min=13, max=5172, avg=592.89, stdev=550.09 00:29:21.224 lat (msec): min=15, max=5176, avg=600.61, stdev=569.22 00:29:21.224 clat percentiles (msec): 00:29:21.224 | 1.00th=[ 34], 5.00th=[ 203], 10.00th=[ 384], 20.00th=[ 388], 00:29:21.224 | 30.00th=[ 393], 40.00th=[ 401], 50.00th=[ 531], 60.00th=[ 592], 00:29:21.224 | 70.00th=[ 642], 80.00th=[ 659], 90.00th=[ 718], 95.00th=[ 1150], 00:29:21.224 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5201], 99.95th=[ 5201], 00:29:21.224 | 99.99th=[ 5201] 00:29:21.224 bw ( KiB/s): min=100352, max=333824, per=5.74%, avg=228693.33, stdev=71767.72, samples=9 00:29:21.224 iops : min= 98, max= 326, avg=223.33, stdev=70.09, samples=9 00:29:21.224 lat (msec) : 20=0.23%, 50=1.08%, 100=1.24%, 250=3.55%, 500=40.15% 00:29:21.224 lat (msec) : 750=44.02%, 1000=2.70%, 2000=5.71%, >=2000=1.31% 00:29:21.224 cpu : usr=0.12%, sys=1.97%, ctx=1403, majf=0, minf=32769 00:29:21.224 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:29:21.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.224 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.224 issued rwts: total=1295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.224 job1: (groupid=0, jobs=1): err= 0: pid=1239258: Fri Jul 26 20:50:08 2024 00:29:21.224 read: IOPS=61, BW=61.9MiB/s (64.9MB/s)(647MiB/10457msec) 00:29:21.224 slat (usec): min=33, max=2150.1k, avg=16071.51, stdev=121248.58 00:29:21.224 clat (msec): min=54, max=6113, avg=1939.86, stdev=1689.98 00:29:21.224 lat (msec): min=795, max=6123, avg=1955.93, stdev=1693.41 00:29:21.224 clat percentiles (msec): 00:29:21.224 | 1.00th=[ 793], 5.00th=[ 818], 10.00th=[ 827], 20.00th=[ 852], 00:29:21.224 | 30.00th=[ 944], 40.00th=[ 1116], 50.00th=[ 1150], 60.00th=[ 1183], 00:29:21.224 | 70.00th=[ 1569], 80.00th=[ 2140], 90.00th=[ 5269], 95.00th=[ 5671], 00:29:21.224 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6141], 99.95th=[ 6141], 00:29:21.224 | 99.99th=[ 6141] 00:29:21.224 bw ( KiB/s): min= 2048, max=157696, per=2.22%, avg=88576.00, stdev=56446.80, samples=12 00:29:21.224 iops : min= 2, max= 154, avg=86.50, stdev=55.12, samples=12 00:29:21.224 lat (msec) : 100=0.15%, 1000=31.99%, 2000=47.14%, >=2000=20.71% 00:29:21.224 cpu : usr=0.03%, sys=1.76%, ctx=870, majf=0, minf=32769 00:29:21.224 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:29:21.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.224 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:29:21.224 issued rwts: total=647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.224 job1: (groupid=0, jobs=1): err= 0: pid=1239259: Fri Jul 26 20:50:08 2024 00:29:21.224 read: IOPS=95, BW=95.2MiB/s (99.9MB/s)(1000MiB/10499msec) 00:29:21.224 slat (usec): min=44, max=2020.2k, avg=10447.02, stdev=71566.44 00:29:21.224 clat (msec): min=47, max=4270, avg=1279.49, stdev=717.97 00:29:21.224 lat (msec): min=512, max=5235, avg=1289.93, stdev=722.74 00:29:21.224 clat percentiles (msec): 00:29:21.224 | 1.00th=[ 514], 5.00th=[ 558], 10.00th=[ 651], 20.00th=[ 768], 00:29:21.224 | 30.00th=[ 827], 40.00th=[ 885], 50.00th=[ 919], 60.00th=[ 1083], 00:29:21.224 | 70.00th=[ 1200], 80.00th=[ 2265], 90.00th=[ 2366], 95.00th=[ 2668], 00:29:21.224 | 99.00th=[ 2970], 99.50th=[ 3037], 99.90th=[ 4279], 99.95th=[ 4279], 00:29:21.224 | 99.99th=[ 4279] 00:29:21.224 bw ( KiB/s): min=40960, max=215040, per=3.20%, avg=127534.21, stdev=49784.48, samples=14 00:29:21.224 iops : min= 40, max= 210, avg=124.50, stdev=48.56, samples=14 00:29:21.224 lat (msec) : 50=0.10%, 750=17.30%, 1000=37.80%, 2000=19.40%, >=2000=25.40% 00:29:21.224 cpu : usr=0.03%, sys=1.66%, ctx=1303, majf=0, minf=32231 00:29:21.224 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:29:21.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.224 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.224 issued rwts: total=1000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.224 job1: (groupid=0, jobs=1): err= 0: pid=1239260: Fri Jul 26 20:50:08 2024 00:29:21.224 read: IOPS=6, BW=7033KiB/s (7202kB/s)(71.0MiB/10337msec) 00:29:21.224 slat (usec): min=1496, max=2098.1k, avg=144740.73, stdev=494632.81 00:29:21.224 clat (msec): min=59, max=10333, avg=8609.16, stdev=2626.83 00:29:21.224 lat (msec): min=2130, max=10336, avg=8753.90, stdev=2424.33 00:29:21.224 clat percentiles (msec): 00:29:21.224 | 1.00th=[ 60], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 6477], 00:29:21.224 | 30.00th=[ 9731], 40.00th=[ 9731], 50.00th=[ 9866], 60.00th=[10000], 00:29:21.224 | 70.00th=[10000], 80.00th=[10134], 90.00th=[10268], 95.00th=[10268], 00:29:21.224 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:29:21.224 | 99.99th=[10268] 00:29:21.224 lat (msec) : 100=1.41%, >=2000=98.59% 00:29:21.224 cpu : usr=0.00%, sys=0.75%, ctx=139, majf=0, minf=18177 00:29:21.224 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.5%, 32=45.1%, >=64=11.3% 00:29:21.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.224 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:29:21.224 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.225 job1: (groupid=0, jobs=1): err= 0: pid=1239261: Fri Jul 26 20:50:08 2024 00:29:21.225 read: IOPS=4, BW=5032KiB/s (5152kB/s)(51.0MiB/10379msec) 00:29:21.225 slat (usec): min=919, max=2109.7k, avg=202572.83, stdev=572881.36 00:29:21.225 clat (msec): min=47, max=10364, avg=7155.48, stdev=3343.09 00:29:21.225 lat (msec): min=2116, max=10378, avg=7358.05, stdev=3214.27 00:29:21.225 clat percentiles (msec): 00:29:21.225 | 1.00th=[ 47], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 
2165], 00:29:21.225 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 9597], 60.00th=[ 9731], 00:29:21.225 | 70.00th=[ 9866], 80.00th=[10000], 90.00th=[10134], 95.00th=[10268], 00:29:21.225 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:29:21.225 | 99.99th=[10402] 00:29:21.225 lat (msec) : 50=1.96%, >=2000=98.04% 00:29:21.225 cpu : usr=0.00%, sys=0.42%, ctx=186, majf=0, minf=13057 00:29:21.225 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:29:21.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.225 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:29:21.225 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.225 job1: (groupid=0, jobs=1): err= 0: pid=1239262: Fri Jul 26 20:50:08 2024 00:29:21.225 read: IOPS=26, BW=26.5MiB/s (27.8MB/s)(275MiB/10375msec) 00:29:21.225 slat (usec): min=45, max=2083.8k, avg=37584.36, stdev=238750.08 00:29:21.225 clat (msec): min=37, max=6416, avg=2995.41, stdev=2296.24 00:29:21.225 lat (msec): min=599, max=8500, avg=3033.00, stdev=2309.46 00:29:21.225 clat percentiles (msec): 00:29:21.225 | 1.00th=[ 600], 5.00th=[ 600], 10.00th=[ 600], 20.00th=[ 600], 00:29:21.225 | 30.00th=[ 625], 40.00th=[ 659], 50.00th=[ 2735], 60.00th=[ 5201], 00:29:21.225 | 70.00th=[ 5336], 80.00th=[ 5470], 90.00th=[ 5604], 95.00th=[ 5671], 00:29:21.225 | 99.00th=[ 5738], 99.50th=[ 5738], 99.90th=[ 6409], 99.95th=[ 6409], 00:29:21.225 | 99.99th=[ 6409] 00:29:21.225 bw ( KiB/s): min=12288, max=206848, per=1.89%, avg=75264.00, stdev=90643.76, samples=4 00:29:21.225 iops : min= 12, max= 202, avg=73.50, stdev=88.52, samples=4 00:29:21.225 lat (msec) : 50=0.36%, 750=43.64%, 2000=3.27%, >=2000=52.73% 00:29:21.225 cpu : usr=0.00%, sys=0.99%, ctx=289, majf=0, minf=32769 00:29:21.225 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.8%, 32=11.6%, >=64=77.1% 00:29:21.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.225 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:29:21.225 issued rwts: total=275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.225 job1: (groupid=0, jobs=1): err= 0: pid=1239263: Fri Jul 26 20:50:08 2024 00:29:21.225 read: IOPS=52, BW=52.3MiB/s (54.9MB/s)(541MiB/10341msec) 00:29:21.225 slat (usec): min=45, max=2170.7k, avg=19008.74, stdev=151021.63 00:29:21.225 clat (msec): min=54, max=8579, avg=1947.27, stdev=1970.75 00:29:21.225 lat (msec): min=580, max=8595, avg=1966.28, stdev=1978.89 00:29:21.225 clat percentiles (msec): 00:29:21.225 | 1.00th=[ 584], 5.00th=[ 634], 10.00th=[ 634], 20.00th=[ 642], 00:29:21.225 | 30.00th=[ 642], 40.00th=[ 684], 50.00th=[ 760], 60.00th=[ 1217], 00:29:21.225 | 70.00th=[ 1469], 80.00th=[ 5000], 90.00th=[ 5403], 95.00th=[ 5805], 00:29:21.225 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 8557], 99.95th=[ 8557], 00:29:21.225 | 99.99th=[ 8557] 00:29:21.225 bw ( KiB/s): min= 2048, max=210944, per=2.36%, avg=93980.44, stdev=79268.74, samples=9 00:29:21.225 iops : min= 2, max= 206, avg=91.78, stdev=77.41, samples=9 00:29:21.225 lat (msec) : 100=0.18%, 750=49.35%, 1000=5.91%, 2000=20.15%, >=2000=24.40% 00:29:21.225 cpu : usr=0.00%, sys=1.13%, ctx=666, majf=0, minf=32769 00:29:21.225 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.4% 00:29:21.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:29:21.225 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:29:21.225 issued rwts: total=541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.225 job1: (groupid=0, jobs=1): err= 0: pid=1239264: Fri Jul 26 20:50:08 2024 00:29:21.225 read: IOPS=2, BW=2744KiB/s (2810kB/s)(28.0MiB/10449msec) 00:29:21.225 slat (usec): min=1047, max=2114.6k, avg=371821.39, stdev=787818.51 00:29:21.225 clat (msec): min=37, max=10446, avg=8013.40, stdev=3204.25 00:29:21.225 lat (msec): min=2125, max=10448, avg=8385.22, stdev=2826.22 00:29:21.225 clat percentiles (msec): 00:29:21.225 | 1.00th=[ 38], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4329], 00:29:21.225 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[10268], 60.00th=[10402], 00:29:21.225 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:29:21.225 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:29:21.225 | 99.99th=[10402] 00:29:21.225 lat (msec) : 50=3.57%, >=2000=96.43% 00:29:21.225 cpu : usr=0.00%, sys=0.23%, ctx=87, majf=0, minf=7169 00:29:21.225 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:29:21.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.225 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:29:21.225 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.225 job1: (groupid=0, jobs=1): err= 0: pid=1239265: Fri Jul 26 20:50:08 2024 00:29:21.225 read: IOPS=5, BW=5214KiB/s (5339kB/s)(53.0MiB/10409msec) 00:29:21.225 slat (usec): min=879, max=2098.1k, avg=195251.39, stdev=587831.91 00:29:21.225 clat (msec): min=59, max=10404, avg=8052.95, stdev=3041.84 00:29:21.225 lat (msec): min=2124, max=10408, avg=8248.20, stdev=2844.65 00:29:21.225 clat percentiles (msec): 00:29:21.225 | 1.00th=[ 61], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329], 00:29:21.225 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[10268], 60.00th=[10268], 00:29:21.225 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:29:21.225 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:29:21.225 | 99.99th=[10402] 00:29:21.225 lat (msec) : 100=1.89%, >=2000=98.11% 00:29:21.225 cpu : usr=0.00%, sys=0.53%, ctx=77, majf=0, minf=13569 00:29:21.225 IO depths : 1=1.9%, 2=3.8%, 4=7.5%, 8=15.1%, 16=30.2%, 32=41.5%, >=64=0.0% 00:29:21.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.225 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:29:21.225 issued rwts: total=53,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.225 job1: (groupid=0, jobs=1): err= 0: pid=1239266: Fri Jul 26 20:50:08 2024 00:29:21.225 read: IOPS=50, BW=50.4MiB/s (52.9MB/s)(508MiB/10076msec) 00:29:21.225 slat (usec): min=43, max=2148.2k, avg=19686.49, stdev=152721.13 00:29:21.225 clat (msec): min=72, max=7653, avg=1071.23, stdev=1292.38 00:29:21.225 lat (msec): min=76, max=7672, avg=1090.92, stdev=1330.04 00:29:21.225 clat percentiles (msec): 00:29:21.225 | 1.00th=[ 83], 5.00th=[ 161], 10.00th=[ 300], 20.00th=[ 527], 00:29:21.225 | 30.00th=[ 592], 40.00th=[ 600], 50.00th=[ 625], 60.00th=[ 667], 00:29:21.225 | 70.00th=[ 986], 80.00th=[ 1368], 90.00th=[ 1888], 95.00th=[ 3876], 00:29:21.225 | 99.00th=[ 7617], 99.50th=[ 7684], 99.90th=[ 7684], 
99.95th=[ 7684], 00:29:21.225 | 99.99th=[ 7684] 00:29:21.225 bw ( KiB/s): min=61440, max=221184, per=3.89%, avg=154886.00, stdev=72252.54, samples=5 00:29:21.225 iops : min= 60, max= 216, avg=151.20, stdev=70.51, samples=5 00:29:21.225 lat (msec) : 100=2.56%, 250=5.91%, 500=9.25%, 750=47.64%, 1000=5.31% 00:29:21.225 lat (msec) : 2000=23.23%, >=2000=6.10% 00:29:21.225 cpu : usr=0.00%, sys=0.91%, ctx=760, majf=0, minf=32769 00:29:21.225 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:29:21.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.225 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:29:21.225 issued rwts: total=508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.225 job1: (groupid=0, jobs=1): err= 0: pid=1239267: Fri Jul 26 20:50:08 2024 00:29:21.225 read: IOPS=5, BW=6051KiB/s (6196kB/s)(62.0MiB/10493msec) 00:29:21.225 slat (usec): min=752, max=2119.9k, avg=168344.46, stdev=557229.15 00:29:21.225 clat (msec): min=54, max=10489, avg=9099.47, stdev=2518.29 00:29:21.225 lat (msec): min=2147, max=10492, avg=9267.82, stdev=2236.89 00:29:21.225 clat percentiles (msec): 00:29:21.225 | 1.00th=[ 55], 5.00th=[ 4245], 10.00th=[ 4329], 20.00th=[ 8557], 00:29:21.225 | 30.00th=[10268], 40.00th=[10402], 50.00th=[10402], 60.00th=[10402], 00:29:21.225 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:29:21.225 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:29:21.225 | 99.99th=[10537] 00:29:21.225 lat (msec) : 100=1.61%, >=2000=98.39% 00:29:21.225 cpu : usr=0.00%, sys=0.65%, ctx=84, majf=0, minf=15873 00:29:21.225 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0% 00:29:21.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.225 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:29:21.225 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.225 job2: (groupid=0, jobs=1): err= 0: pid=1239268: Fri Jul 26 20:50:08 2024 00:29:21.225 read: IOPS=16, BW=16.9MiB/s (17.7MB/s)(176MiB/10425msec) 00:29:21.225 slat (usec): min=75, max=4218.5k, avg=58884.81, stdev=378523.96 00:29:21.226 clat (msec): min=59, max=9703, avg=7023.64, stdev=3211.09 00:29:21.226 lat (msec): min=1264, max=9735, avg=7082.53, stdev=3172.74 00:29:21.226 clat percentiles (msec): 00:29:21.226 | 1.00th=[ 1267], 5.00th=[ 1401], 10.00th=[ 1569], 20.00th=[ 2106], 00:29:21.226 | 30.00th=[ 5873], 40.00th=[ 7886], 50.00th=[ 8926], 60.00th=[ 9329], 00:29:21.226 | 70.00th=[ 9463], 80.00th=[ 9463], 90.00th=[ 9597], 95.00th=[ 9597], 00:29:21.226 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:29:21.226 | 99.99th=[ 9731] 00:29:21.226 bw ( KiB/s): min= 4096, max=34816, per=0.49%, avg=19660.80, stdev=11085.72, samples=5 00:29:21.226 iops : min= 4, max= 34, avg=19.20, stdev=10.83, samples=5 00:29:21.226 lat (msec) : 100=0.57%, 2000=15.34%, >=2000=84.09% 00:29:21.226 cpu : usr=0.03%, sys=0.90%, ctx=637, majf=0, minf=32769 00:29:21.226 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.5%, 16=9.1%, 32=18.2%, >=64=64.2% 00:29:21.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.226 complete : 0=0.0%, 4=98.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.0% 00:29:21.226 issued rwts: total=176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.226 
latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.226 job2: (groupid=0, jobs=1): err= 0: pid=1239269: Fri Jul 26 20:50:08 2024 00:29:21.226 read: IOPS=20, BW=20.9MiB/s (21.9MB/s)(217MiB/10379msec) 00:29:21.226 slat (usec): min=59, max=2092.4k, avg=47588.58, stdev=242061.01 00:29:21.226 clat (msec): min=50, max=10186, avg=5635.09, stdev=2826.77 00:29:21.226 lat (msec): min=1849, max=10187, avg=5682.67, stdev=2807.81 00:29:21.226 clat percentiles (msec): 00:29:21.226 | 1.00th=[ 1838], 5.00th=[ 1972], 10.00th=[ 2165], 20.00th=[ 2333], 00:29:21.226 | 30.00th=[ 2635], 40.00th=[ 4245], 50.00th=[ 7013], 60.00th=[ 7349], 00:29:21.226 | 70.00th=[ 7752], 80.00th=[ 8356], 90.00th=[ 8926], 95.00th=[ 9194], 00:29:21.226 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:29:21.226 | 99.99th=[10134] 00:29:21.226 bw ( KiB/s): min= 6144, max=63488, per=0.65%, avg=26038.86, stdev=23431.91, samples=7 00:29:21.226 iops : min= 6, max= 62, avg=25.43, stdev=22.88, samples=7 00:29:21.226 lat (msec) : 100=0.46%, 2000=5.53%, >=2000=94.01% 00:29:21.226 cpu : usr=0.02%, sys=1.00%, ctx=704, majf=0, minf=32769 00:29:21.226 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.7%, 16=7.4%, 32=14.7%, >=64=71.0% 00:29:21.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.226 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:29:21.226 issued rwts: total=217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.226 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.226 job2: (groupid=0, jobs=1): err= 0: pid=1239270: Fri Jul 26 20:50:08 2024 00:29:21.226 read: IOPS=6, BW=6691KiB/s (6851kB/s)(68.0MiB/10407msec) 00:29:21.226 slat (usec): min=615, max=2092.0k, avg=152040.75, stdev=523021.64 00:29:21.226 clat (msec): min=67, max=10403, avg=7119.55, stdev=3241.16 00:29:21.226 lat (msec): min=2123, max=10406, avg=7271.59, stdev=3146.51 00:29:21.226 clat percentiles (msec): 00:29:21.226 | 1.00th=[ 67], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4279], 00:29:21.226 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[ 8658], 00:29:21.226 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:29:21.226 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:29:21.226 | 99.99th=[10402] 00:29:21.226 lat (msec) : 100=1.47%, >=2000=98.53% 00:29:21.226 cpu : usr=0.00%, sys=0.62%, ctx=74, majf=0, minf=17409 00:29:21.226 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:29:21.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.226 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:29:21.226 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.226 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.226 job2: (groupid=0, jobs=1): err= 0: pid=1239271: Fri Jul 26 20:50:08 2024 00:29:21.226 read: IOPS=155, BW=156MiB/s (163MB/s)(1566MiB/10054msec) 00:29:21.226 slat (usec): min=60, max=1428.4k, avg=6383.62, stdev=38378.97 00:29:21.226 clat (msec): min=49, max=2472, avg=721.27, stdev=366.22 00:29:21.226 lat (msec): min=59, max=2476, avg=727.66, stdev=369.19 00:29:21.226 clat percentiles (msec): 00:29:21.226 | 1.00th=[ 96], 5.00th=[ 292], 10.00th=[ 405], 20.00th=[ 625], 00:29:21.226 | 30.00th=[ 634], 40.00th=[ 651], 50.00th=[ 684], 60.00th=[ 709], 00:29:21.226 | 70.00th=[ 751], 80.00th=[ 776], 90.00th=[ 885], 95.00th=[ 969], 00:29:21.226 | 99.00th=[ 2400], 99.50th=[ 2433], 99.90th=[ 2467], 
99.95th=[ 2467], 00:29:21.226 | 99.99th=[ 2467] 00:29:21.226 bw ( KiB/s): min=51200, max=319488, per=4.62%, avg=184192.00, stdev=51931.88, samples=16 00:29:21.226 iops : min= 50, max= 312, avg=179.87, stdev=50.71, samples=16 00:29:21.226 lat (msec) : 50=0.06%, 100=0.96%, 250=3.13%, 500=9.64%, 750=55.87% 00:29:21.226 lat (msec) : 1000=26.50%, >=2000=3.83% 00:29:21.226 cpu : usr=0.07%, sys=2.76%, ctx=1393, majf=0, minf=32769 00:29:21.226 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:29:21.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.226 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.226 issued rwts: total=1566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.226 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.226 job2: (groupid=0, jobs=1): err= 0: pid=1239272: Fri Jul 26 20:50:08 2024 00:29:21.226 read: IOPS=35, BW=35.5MiB/s (37.2MB/s)(371MiB/10445msec) 00:29:21.226 slat (usec): min=136, max=2118.5k, avg=28097.68, stdev=206482.10 00:29:21.226 clat (msec): min=18, max=9155, avg=3462.69, stdev=3583.18 00:29:21.226 lat (msec): min=622, max=9158, avg=3490.79, stdev=3588.93 00:29:21.226 clat percentiles (msec): 00:29:21.226 | 1.00th=[ 617], 5.00th=[ 634], 10.00th=[ 642], 20.00th=[ 659], 00:29:21.226 | 30.00th=[ 684], 40.00th=[ 726], 50.00th=[ 776], 60.00th=[ 1972], 00:29:21.226 | 70.00th=[ 7215], 80.00th=[ 8658], 90.00th=[ 8926], 95.00th=[ 9060], 00:29:21.226 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:29:21.226 | 99.99th=[ 9194] 00:29:21.226 bw ( KiB/s): min= 8192, max=206435, per=1.78%, avg=71035.86, stdev=72584.82, samples=7 00:29:21.226 iops : min= 8, max= 201, avg=69.29, stdev=70.70, samples=7 00:29:21.226 lat (msec) : 20=0.27%, 750=45.82%, 1000=9.16%, 2000=5.66%, >=2000=39.08% 00:29:21.226 cpu : usr=0.02%, sys=1.46%, ctx=569, majf=0, minf=32769 00:29:21.226 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.6%, >=64=83.0% 00:29:21.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.226 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:29:21.226 issued rwts: total=371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.226 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.226 job2: (groupid=0, jobs=1): err= 0: pid=1239273: Fri Jul 26 20:50:08 2024 00:29:21.226 read: IOPS=110, BW=111MiB/s (116MB/s)(1161MiB/10477msec) 00:29:21.226 slat (usec): min=44, max=2024.4k, avg=8963.67, stdev=60150.18 00:29:21.226 clat (msec): min=62, max=2728, avg=1074.52, stdev=785.14 00:29:21.226 lat (msec): min=514, max=2768, avg=1083.48, stdev=788.00 00:29:21.226 clat percentiles (msec): 00:29:21.226 | 1.00th=[ 514], 5.00th=[ 523], 10.00th=[ 523], 20.00th=[ 527], 00:29:21.226 | 30.00th=[ 558], 40.00th=[ 617], 50.00th=[ 651], 60.00th=[ 659], 00:29:21.226 | 70.00th=[ 877], 80.00th=[ 2198], 90.00th=[ 2500], 95.00th=[ 2601], 00:29:21.226 | 99.00th=[ 2702], 99.50th=[ 2702], 99.90th=[ 2735], 99.95th=[ 2735], 00:29:21.226 | 99.99th=[ 2735] 00:29:21.226 bw ( KiB/s): min=20480, max=247808, per=3.79%, avg=151113.14, stdev=84842.12, samples=14 00:29:21.226 iops : min= 20, max= 242, avg=147.57, stdev=82.85, samples=14 00:29:21.226 lat (msec) : 100=0.09%, 750=68.39%, 1000=3.01%, 2000=5.77%, >=2000=22.74% 00:29:21.226 cpu : usr=0.11%, sys=2.10%, ctx=1614, majf=0, minf=32769 00:29:21.226 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.6% 00:29:21.226 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.226 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.226 issued rwts: total=1161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.226 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.226 job2: (groupid=0, jobs=1): err= 0: pid=1239274: Fri Jul 26 20:50:08 2024 00:29:21.226 read: IOPS=37, BW=37.6MiB/s (39.4MB/s)(391MiB/10393msec) 00:29:21.226 slat (usec): min=43, max=3265.5k, avg=26432.75, stdev=195078.17 00:29:21.226 clat (msec): min=55, max=5578, avg=3167.88, stdev=1579.08 00:29:21.226 lat (msec): min=1224, max=7599, avg=3194.31, stdev=1588.99 00:29:21.226 clat percentiles (msec): 00:29:21.226 | 1.00th=[ 1234], 5.00th=[ 1334], 10.00th=[ 1469], 20.00th=[ 1586], 00:29:21.226 | 30.00th=[ 1921], 40.00th=[ 2265], 50.00th=[ 2668], 60.00th=[ 3037], 00:29:21.227 | 70.00th=[ 4933], 80.00th=[ 5201], 90.00th=[ 5470], 95.00th=[ 5470], 00:29:21.227 | 99.00th=[ 5537], 99.50th=[ 5604], 99.90th=[ 5604], 99.95th=[ 5604], 00:29:21.227 | 99.99th=[ 5604] 00:29:21.227 bw ( KiB/s): min= 2048, max=104448, per=1.50%, avg=59847.11, stdev=30954.25, samples=9 00:29:21.227 iops : min= 2, max= 102, avg=58.44, stdev=30.23, samples=9 00:29:21.227 lat (msec) : 100=0.26%, 2000=31.46%, >=2000=68.29% 00:29:21.227 cpu : usr=0.01%, sys=1.20%, ctx=824, majf=0, minf=32769 00:29:21.227 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.2%, >=64=83.9% 00:29:21.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.227 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:29:21.227 issued rwts: total=391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.227 job2: (groupid=0, jobs=1): err= 0: pid=1239275: Fri Jul 26 20:50:08 2024 00:29:21.227 read: IOPS=16, BW=16.4MiB/s (17.2MB/s)(171MiB/10418msec) 00:29:21.227 slat (usec): min=64, max=2098.5k, avg=60548.47, stdev=299278.33 00:29:21.227 clat (msec): min=62, max=9718, avg=6645.46, stdev=3050.75 00:29:21.227 lat (msec): min=1752, max=9725, avg=6706.01, stdev=3019.37 00:29:21.227 clat percentiles (msec): 00:29:21.227 | 1.00th=[ 1754], 5.00th=[ 1989], 10.00th=[ 2072], 20.00th=[ 2165], 00:29:21.227 | 30.00th=[ 4329], 40.00th=[ 7953], 50.00th=[ 8020], 60.00th=[ 8658], 00:29:21.227 | 70.00th=[ 8792], 80.00th=[ 9060], 90.00th=[ 9329], 95.00th=[ 9597], 00:29:21.227 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:29:21.227 | 99.99th=[ 9731] 00:29:21.227 bw ( KiB/s): min=10240, max=77824, per=1.11%, avg=44032.00, stdev=47789.10, samples=2 00:29:21.227 iops : min= 10, max= 76, avg=43.00, stdev=46.67, samples=2 00:29:21.227 lat (msec) : 100=0.58%, 2000=9.36%, >=2000=90.06% 00:29:21.227 cpu : usr=0.00%, sys=0.90%, ctx=388, majf=0, minf=32769 00:29:21.227 IO depths : 1=0.6%, 2=1.2%, 4=2.3%, 8=4.7%, 16=9.4%, 32=18.7%, >=64=63.2% 00:29:21.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.227 complete : 0=0.0%, 4=97.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.2% 00:29:21.227 issued rwts: total=171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.227 job2: (groupid=0, jobs=1): err= 0: pid=1239276: Fri Jul 26 20:50:08 2024 00:29:21.227 read: IOPS=48, BW=48.8MiB/s (51.1MB/s)(503MiB/10313msec) 00:29:21.227 slat (usec): min=48, max=2120.3k, avg=20433.39, stdev=107551.85 00:29:21.227 clat (msec): min=30, max=5273, avg=2368.68, stdev=1377.53 
00:29:21.227 lat (msec): min=642, max=5332, avg=2389.11, stdev=1381.52 00:29:21.227 clat percentiles (msec): 00:29:21.227 | 1.00th=[ 651], 5.00th=[ 978], 10.00th=[ 1150], 20.00th=[ 1301], 00:29:21.227 | 30.00th=[ 1469], 40.00th=[ 1586], 50.00th=[ 1703], 60.00th=[ 1938], 00:29:21.227 | 70.00th=[ 2567], 80.00th=[ 4212], 90.00th=[ 4732], 95.00th=[ 5000], 00:29:21.227 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:29:21.227 | 99.99th=[ 5269] 00:29:21.227 bw ( KiB/s): min=10240, max=182272, per=1.75%, avg=69818.18, stdev=45835.45, samples=11 00:29:21.227 iops : min= 10, max= 178, avg=68.18, stdev=44.76, samples=11 00:29:21.227 lat (msec) : 50=0.20%, 750=2.19%, 1000=2.78%, 2000=56.46%, >=2000=38.37% 00:29:21.227 cpu : usr=0.05%, sys=1.13%, ctx=1165, majf=0, minf=32769 00:29:21.227 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.5% 00:29:21.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.227 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:29:21.227 issued rwts: total=503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.227 job2: (groupid=0, jobs=1): err= 0: pid=1239277: Fri Jul 26 20:50:08 2024 00:29:21.227 read: IOPS=28, BW=28.5MiB/s (29.9MB/s)(297MiB/10420msec) 00:29:21.227 slat (usec): min=55, max=2183.1k, avg=34909.95, stdev=220689.84 00:29:21.227 clat (msec): min=50, max=8633, avg=4197.43, stdev=2734.66 00:29:21.227 lat (msec): min=1034, max=9567, avg=4232.34, stdev=2733.44 00:29:21.227 clat percentiles (msec): 00:29:21.227 | 1.00th=[ 1028], 5.00th=[ 1062], 10.00th=[ 1133], 20.00th=[ 2022], 00:29:21.227 | 30.00th=[ 2165], 40.00th=[ 2232], 50.00th=[ 2265], 60.00th=[ 6879], 00:29:21.227 | 70.00th=[ 7148], 80.00th=[ 7349], 90.00th=[ 7550], 95.00th=[ 7684], 00:29:21.227 | 99.00th=[ 7819], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:29:21.227 | 99.99th=[ 8658] 00:29:21.227 bw ( KiB/s): min= 2048, max=135168, per=1.24%, avg=49444.57, stdev=55854.84, samples=7 00:29:21.227 iops : min= 2, max= 132, avg=48.29, stdev=54.55, samples=7 00:29:21.227 lat (msec) : 100=0.34%, 2000=18.86%, >=2000=80.81% 00:29:21.227 cpu : usr=0.01%, sys=0.85%, ctx=549, majf=0, minf=32769 00:29:21.227 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.8% 00:29:21.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.227 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:29:21.227 issued rwts: total=297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.227 job2: (groupid=0, jobs=1): err= 0: pid=1239278: Fri Jul 26 20:50:08 2024 00:29:21.227 read: IOPS=62, BW=62.7MiB/s (65.7MB/s)(654MiB/10435msec) 00:29:21.227 slat (usec): min=45, max=2105.4k, avg=15853.85, stdev=116202.73 00:29:21.227 clat (msec): min=61, max=4900, avg=1842.57, stdev=1508.90 00:29:21.227 lat (msec): min=609, max=4952, avg=1858.42, stdev=1510.52 00:29:21.227 clat percentiles (msec): 00:29:21.227 | 1.00th=[ 617], 5.00th=[ 634], 10.00th=[ 634], 20.00th=[ 634], 00:29:21.227 | 30.00th=[ 634], 40.00th=[ 693], 50.00th=[ 919], 60.00th=[ 1804], 00:29:21.227 | 70.00th=[ 2433], 80.00th=[ 2735], 90.00th=[ 4597], 95.00th=[ 4732], 00:29:21.227 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4933], 99.95th=[ 4933], 00:29:21.227 | 99.99th=[ 4933] 00:29:21.227 bw ( KiB/s): min= 4096, max=219136, per=2.70%, avg=107724.80, stdev=78659.08, samples=10 
00:29:21.227 iops : min= 4, max= 214, avg=105.20, stdev=76.82, samples=10 00:29:21.227 lat (msec) : 100=0.15%, 750=44.19%, 1000=7.49%, 2000=10.70%, >=2000=37.46% 00:29:21.227 cpu : usr=0.04%, sys=1.58%, ctx=1060, majf=0, minf=32769 00:29:21.227 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4% 00:29:21.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.227 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:29:21.227 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.227 job2: (groupid=0, jobs=1): err= 0: pid=1239279: Fri Jul 26 20:50:08 2024 00:29:21.227 read: IOPS=35, BW=35.4MiB/s (37.1MB/s)(366MiB/10342msec) 00:29:21.227 slat (usec): min=42, max=2120.2k, avg=28083.79, stdev=198547.06 00:29:21.227 clat (msec): min=61, max=6441, avg=3323.57, stdev=1707.52 00:29:21.227 lat (msec): min=782, max=8562, avg=3351.65, stdev=1718.72 00:29:21.227 clat percentiles (msec): 00:29:21.227 | 1.00th=[ 785], 5.00th=[ 802], 10.00th=[ 835], 20.00th=[ 944], 00:29:21.227 | 30.00th=[ 1217], 40.00th=[ 3540], 50.00th=[ 3775], 60.00th=[ 3977], 00:29:21.227 | 70.00th=[ 5000], 80.00th=[ 5067], 90.00th=[ 5067], 95.00th=[ 5067], 00:29:21.227 | 99.00th=[ 5067], 99.50th=[ 5134], 99.90th=[ 6409], 99.95th=[ 6409], 00:29:21.227 | 99.99th=[ 6409] 00:29:21.227 bw ( KiB/s): min= 2048, max=157696, per=1.75%, avg=69652.43, stdev=64156.92, samples=7 00:29:21.227 iops : min= 2, max= 154, avg=68.00, stdev=62.65, samples=7 00:29:21.227 lat (msec) : 100=0.27%, 1000=23.50%, 2000=6.28%, >=2000=69.95% 00:29:21.227 cpu : usr=0.00%, sys=1.23%, ctx=523, majf=0, minf=32769 00:29:21.227 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.7%, >=64=82.8% 00:29:21.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.228 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:29:21.228 issued rwts: total=366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.228 job2: (groupid=0, jobs=1): err= 0: pid=1239280: Fri Jul 26 20:50:08 2024 00:29:21.228 read: IOPS=18, BW=18.7MiB/s (19.6MB/s)(196MiB/10490msec) 00:29:21.228 slat (usec): min=43, max=2119.6k, avg=53233.56, stdev=293390.18 00:29:21.228 clat (msec): min=55, max=10117, avg=6505.75, stdev=3482.50 00:29:21.228 lat (msec): min=1301, max=10118, avg=6558.99, stdev=3459.08 00:29:21.228 clat percentiles (msec): 00:29:21.228 | 1.00th=[ 1301], 5.00th=[ 1368], 10.00th=[ 1435], 20.00th=[ 1636], 00:29:21.228 | 30.00th=[ 3775], 40.00th=[ 6477], 50.00th=[ 8792], 60.00th=[ 8926], 00:29:21.228 | 70.00th=[ 9194], 80.00th=[ 9463], 90.00th=[ 9866], 95.00th=[10000], 00:29:21.228 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:29:21.228 | 99.99th=[10134] 00:29:21.228 bw ( KiB/s): min= 4096, max=65536, per=0.70%, avg=27836.40, stdev=25138.91, samples=5 00:29:21.228 iops : min= 4, max= 64, avg=27.00, stdev=24.43, samples=5 00:29:21.228 lat (msec) : 100=0.51%, 2000=26.53%, >=2000=72.96% 00:29:21.228 cpu : usr=0.00%, sys=1.12%, ctx=401, majf=0, minf=32769 00:29:21.228 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.1%, 16=8.2%, 32=16.3%, >=64=67.9% 00:29:21.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.228 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:29:21.228 issued rwts: total=196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.228 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:29:21.228 job3: (groupid=0, jobs=1): err= 0: pid=1239281: Fri Jul 26 20:50:08 2024 00:29:21.228 read: IOPS=6, BW=7130KiB/s (7301kB/s)(73.0MiB/10484msec) 00:29:21.228 slat (usec): min=497, max=2088.9k, avg=142478.94, stdev=505094.26 00:29:21.228 clat (msec): min=82, max=10482, avg=8878.47, stdev=2695.01 00:29:21.228 lat (msec): min=2132, max=10483, avg=9020.95, stdev=2490.73 00:29:21.228 clat percentiles (msec): 00:29:21.228 | 1.00th=[ 83], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 6477], 00:29:21.228 | 30.00th=[ 8658], 40.00th=[10402], 50.00th=[10402], 60.00th=[10402], 00:29:21.228 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10537], 00:29:21.228 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:29:21.228 | 99.99th=[10537] 00:29:21.228 lat (msec) : 100=1.37%, >=2000=98.63% 00:29:21.228 cpu : usr=0.00%, sys=0.73%, ctx=112, majf=0, minf=18689 00:29:21.228 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.0%, 16=21.9%, 32=43.8%, >=64=13.7% 00:29:21.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.228 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:29:21.228 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.228 job3: (groupid=0, jobs=1): err= 0: pid=1239282: Fri Jul 26 20:50:08 2024 00:29:21.228 read: IOPS=90, BW=90.1MiB/s (94.5MB/s)(902MiB/10013msec) 00:29:21.228 slat (usec): min=42, max=2106.1k, avg=11082.46, stdev=111337.40 00:29:21.228 clat (msec): min=10, max=6504, avg=848.68, stdev=1257.66 00:29:21.228 lat (msec): min=12, max=6875, avg=859.76, stdev=1279.20 00:29:21.228 clat percentiles (msec): 00:29:21.228 | 1.00th=[ 20], 5.00th=[ 51], 10.00th=[ 159], 20.00th=[ 355], 00:29:21.228 | 30.00th=[ 447], 40.00th=[ 617], 50.00th=[ 659], 60.00th=[ 667], 00:29:21.228 | 70.00th=[ 718], 80.00th=[ 751], 90.00th=[ 751], 95.00th=[ 4866], 00:29:21.228 | 99.00th=[ 6477], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:29:21.228 | 99.99th=[ 6477] 00:29:21.228 bw ( KiB/s): min=12288, max=229376, per=4.17%, avg=165888.00, stdev=70717.81, samples=7 00:29:21.228 iops : min= 12, max= 224, avg=162.00, stdev=69.06, samples=7 00:29:21.228 lat (msec) : 20=1.22%, 50=3.77%, 100=3.22%, 250=5.43%, 500=19.51% 00:29:21.228 lat (msec) : 750=52.22%, 1000=8.43%, >=2000=6.21% 00:29:21.228 cpu : usr=0.02%, sys=1.64%, ctx=799, majf=0, minf=32769 00:29:21.228 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:29:21.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.228 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.228 issued rwts: total=902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.228 job3: (groupid=0, jobs=1): err= 0: pid=1239283: Fri Jul 26 20:50:08 2024 00:29:21.228 read: IOPS=2, BW=2177KiB/s (2229kB/s)(22.0MiB/10350msec) 00:29:21.228 slat (msec): min=4, max=2126, avg=467.48, stdev=844.75 00:29:21.228 clat (msec): min=64, max=10328, avg=6966.02, stdev=3610.84 00:29:21.228 lat (msec): min=2095, max=10348, avg=7433.50, stdev=3329.56 00:29:21.228 clat percentiles (msec): 00:29:21.228 | 1.00th=[ 65], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 2140], 00:29:21.228 | 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[10134], 00:29:21.228 | 70.00th=[10134], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 
00:29:21.228 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:29:21.228 | 99.99th=[10268] 00:29:21.228 lat (msec) : 100=4.55%, >=2000=95.45% 00:29:21.228 cpu : usr=0.00%, sys=0.16%, ctx=79, majf=0, minf=5633 00:29:21.228 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:29:21.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.228 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:29:21.228 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.228 job3: (groupid=0, jobs=1): err= 0: pid=1239284: Fri Jul 26 20:50:08 2024 00:29:21.228 read: IOPS=139, BW=140MiB/s (146MB/s)(1400MiB/10033msec) 00:29:21.228 slat (usec): min=45, max=53028, avg=7137.76, stdev=8175.05 00:29:21.228 clat (msec): min=32, max=3052, avg=834.71, stdev=295.08 00:29:21.228 lat (msec): min=34, max=3062, avg=841.84, stdev=296.64 00:29:21.228 clat percentiles (msec): 00:29:21.228 | 1.00th=[ 95], 5.00th=[ 372], 10.00th=[ 634], 20.00th=[ 642], 00:29:21.228 | 30.00th=[ 659], 40.00th=[ 718], 50.00th=[ 760], 60.00th=[ 810], 00:29:21.228 | 70.00th=[ 919], 80.00th=[ 1133], 90.00th=[ 1318], 95.00th=[ 1385], 00:29:21.228 | 99.00th=[ 1401], 99.50th=[ 1401], 99.90th=[ 1418], 99.95th=[ 3037], 00:29:21.228 | 99.99th=[ 3037] 00:29:21.228 bw ( KiB/s): min=22528, max=204800, per=3.63%, avg=144704.17, stdev=55221.19, samples=18 00:29:21.228 iops : min= 22, max= 200, avg=141.28, stdev=53.89, samples=18 00:29:21.228 lat (msec) : 50=0.43%, 100=0.64%, 250=1.93%, 500=3.71%, 750=38.79% 00:29:21.228 lat (msec) : 1000=29.43%, 2000=25.00%, >=2000=0.07% 00:29:21.228 cpu : usr=0.04%, sys=2.14%, ctx=1789, majf=0, minf=32769 00:29:21.228 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:29:21.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.228 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.228 issued rwts: total=1400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.228 job3: (groupid=0, jobs=1): err= 0: pid=1239285: Fri Jul 26 20:50:08 2024 00:29:21.228 read: IOPS=19, BW=19.2MiB/s (20.2MB/s)(200MiB/10397msec) 00:29:21.228 slat (usec): min=584, max=2095.6k, avg=51662.70, stdev=276892.59 00:29:21.228 clat (msec): min=62, max=10388, avg=6082.94, stdev=3393.27 00:29:21.228 lat (msec): min=1474, max=10390, avg=6134.60, stdev=3368.94 00:29:21.228 clat percentiles (msec): 00:29:21.228 | 1.00th=[ 1469], 5.00th=[ 1502], 10.00th=[ 1519], 20.00th=[ 1552], 00:29:21.228 | 30.00th=[ 1636], 40.00th=[ 7349], 50.00th=[ 8288], 60.00th=[ 8490], 00:29:21.228 | 70.00th=[ 8658], 80.00th=[ 8926], 90.00th=[ 9194], 95.00th=[ 9329], 00:29:21.228 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[10402], 99.95th=[10402], 00:29:21.228 | 99.99th=[10402] 00:29:21.228 bw ( KiB/s): min= 2048, max=69632, per=0.62%, avg=24576.00, stdev=31194.21, samples=6 00:29:21.228 iops : min= 2, max= 68, avg=24.00, stdev=30.46, samples=6 00:29:21.228 lat (msec) : 100=0.50%, 2000=31.50%, >=2000=68.00% 00:29:21.228 cpu : usr=0.00%, sys=1.24%, ctx=385, majf=0, minf=32769 00:29:21.228 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=16.0%, >=64=68.5% 00:29:21.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.228 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:29:21.228 issued rwts: 
total=200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.228 job3: (groupid=0, jobs=1): err= 0: pid=1239286: Fri Jul 26 20:50:08 2024 00:29:21.228 read: IOPS=2, BW=2277KiB/s (2332kB/s)(23.0MiB/10344msec) 00:29:21.229 slat (usec): min=1174, max=2090.8k, avg=446068.94, stdev=837648.68 00:29:21.229 clat (msec): min=83, max=10341, avg=6101.48, stdev=3147.04 00:29:21.229 lat (msec): min=2126, max=10343, avg=6547.54, stdev=2977.81 00:29:21.229 clat percentiles (msec): 00:29:21.229 | 1.00th=[ 84], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 2165], 00:29:21.229 | 30.00th=[ 4329], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477], 00:29:21.229 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10268], 95.00th=[10402], 00:29:21.229 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:29:21.229 | 99.99th=[10402] 00:29:21.229 lat (msec) : 100=4.35%, >=2000=95.65% 00:29:21.229 cpu : usr=0.00%, sys=0.20%, ctx=63, majf=0, minf=5889 00:29:21.229 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:29:21.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.229 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:29:21.229 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.229 job3: (groupid=0, jobs=1): err= 0: pid=1239287: Fri Jul 26 20:50:08 2024 00:29:21.229 read: IOPS=23, BW=24.0MiB/s (25.1MB/s)(251MiB/10468msec) 00:29:21.229 slat (usec): min=77, max=2119.4k, avg=41439.55, stdev=227914.78 00:29:21.229 clat (msec): min=64, max=9161, avg=5006.69, stdev=3316.61 00:29:21.229 lat (msec): min=1324, max=9172, avg=5048.13, stdev=3309.07 00:29:21.229 clat percentiles (msec): 00:29:21.229 | 1.00th=[ 1351], 5.00th=[ 1452], 10.00th=[ 1485], 20.00th=[ 1552], 00:29:21.229 | 30.00th=[ 1586], 40.00th=[ 1787], 50.00th=[ 5269], 60.00th=[ 7550], 00:29:21.229 | 70.00th=[ 8221], 80.00th=[ 8658], 90.00th=[ 8926], 95.00th=[ 9060], 00:29:21.229 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:29:21.229 | 99.99th=[ 9194] 00:29:21.229 bw ( KiB/s): min= 2048, max=86016, per=0.79%, avg=31480.25, stdev=31289.13, samples=8 00:29:21.229 iops : min= 2, max= 84, avg=30.62, stdev=30.56, samples=8 00:29:21.229 lat (msec) : 100=0.40%, 2000=43.03%, >=2000=56.57% 00:29:21.229 cpu : usr=0.01%, sys=1.39%, ctx=525, majf=0, minf=32769 00:29:21.229 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.7%, >=64=74.9% 00:29:21.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.229 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:29:21.229 issued rwts: total=251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.229 job3: (groupid=0, jobs=1): err= 0: pid=1239288: Fri Jul 26 20:50:08 2024 00:29:21.229 read: IOPS=17, BW=17.3MiB/s (18.1MB/s)(179MiB/10367msec) 00:29:21.229 slat (usec): min=413, max=2099.1k, avg=57443.50, stdev=295078.66 00:29:21.229 clat (msec): min=82, max=9591, avg=6727.98, stdev=3292.61 00:29:21.229 lat (msec): min=1459, max=9591, avg=6785.42, stdev=3255.67 00:29:21.229 clat percentiles (msec): 00:29:21.229 | 1.00th=[ 1435], 5.00th=[ 1469], 10.00th=[ 1502], 20.00th=[ 1536], 00:29:21.229 | 30.00th=[ 5403], 40.00th=[ 8356], 50.00th=[ 8658], 60.00th=[ 8792], 00:29:21.229 | 70.00th=[ 8926], 80.00th=[ 9194], 90.00th=[ 9329], 95.00th=[ 
9463], 00:29:21.229 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:29:21.229 | 99.99th=[ 9597] 00:29:21.229 bw ( KiB/s): min= 2048, max=61440, per=0.52%, avg=20889.60, stdev=24973.85, samples=5 00:29:21.229 iops : min= 2, max= 60, avg=20.40, stdev=24.39, samples=5 00:29:21.229 lat (msec) : 100=0.56%, 2000=24.58%, >=2000=74.86% 00:29:21.229 cpu : usr=0.01%, sys=1.09%, ctx=398, majf=0, minf=32769 00:29:21.229 IO depths : 1=0.6%, 2=1.1%, 4=2.2%, 8=4.5%, 16=8.9%, 32=17.9%, >=64=64.8% 00:29:21.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.229 complete : 0=0.0%, 4=98.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.9% 00:29:21.229 issued rwts: total=179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.229 job3: (groupid=0, jobs=1): err= 0: pid=1239289: Fri Jul 26 20:50:08 2024 00:29:21.229 read: IOPS=18, BW=18.5MiB/s (19.4MB/s)(192MiB/10360msec) 00:29:21.229 slat (usec): min=744, max=2135.5k, avg=52093.14, stdev=288133.47 00:29:21.229 clat (msec): min=357, max=9458, avg=1454.38, stdev=1767.25 00:29:21.229 lat (msec): min=362, max=9489, avg=1506.47, stdev=1864.16 00:29:21.229 clat percentiles (msec): 00:29:21.229 | 1.00th=[ 363], 5.00th=[ 414], 10.00th=[ 498], 20.00th=[ 651], 00:29:21.229 | 30.00th=[ 818], 40.00th=[ 978], 50.00th=[ 1200], 60.00th=[ 1234], 00:29:21.229 | 70.00th=[ 1267], 80.00th=[ 1318], 90.00th=[ 1485], 95.00th=[ 7684], 00:29:21.229 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:29:21.229 | 99.99th=[ 9463] 00:29:21.229 bw ( KiB/s): min= 6144, max=124486, per=1.64%, avg=65315.00, stdev=83680.43, samples=2 00:29:21.229 iops : min= 6, max= 121, avg=63.50, stdev=81.32, samples=2 00:29:21.229 lat (msec) : 500=10.94%, 750=15.10%, 1000=14.06%, 2000=52.60%, >=2000=7.29% 00:29:21.229 cpu : usr=0.00%, sys=0.73%, ctx=452, majf=0, minf=32769 00:29:21.229 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.2%, 16=8.3%, 32=16.7%, >=64=67.2% 00:29:21.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.229 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.5% 00:29:21.229 issued rwts: total=192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.229 job3: (groupid=0, jobs=1): err= 0: pid=1239290: Fri Jul 26 20:50:08 2024 00:29:21.229 read: IOPS=4, BW=4684KiB/s (4796kB/s)(48.0MiB/10494msec) 00:29:21.229 slat (usec): min=953, max=2148.3k, avg=216948.69, stdev=621338.78 00:29:21.229 clat (msec): min=79, max=10492, avg=8823.17, stdev=3099.70 00:29:21.229 lat (msec): min=2119, max=10493, avg=9040.12, stdev=2827.18 00:29:21.229 clat percentiles (msec): 00:29:21.229 | 1.00th=[ 81], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 6477], 00:29:21.229 | 30.00th=[10402], 40.00th=[10402], 50.00th=[10402], 60.00th=[10402], 00:29:21.229 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:29:21.229 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:29:21.229 | 99.99th=[10537] 00:29:21.229 lat (msec) : 100=2.08%, >=2000=97.92% 00:29:21.229 cpu : usr=0.00%, sys=0.47%, ctx=115, majf=0, minf=12289 00:29:21.229 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:29:21.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.229 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:29:21.229 issued rwts: total=48,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:29:21.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.229 job3: (groupid=0, jobs=1): err= 0: pid=1239291: Fri Jul 26 20:50:08 2024 00:29:21.229 read: IOPS=4, BW=4601KiB/s (4711kB/s)(47.0MiB/10461msec) 00:29:21.229 slat (usec): min=705, max=2131.3k, avg=220852.16, stdev=626972.66 00:29:21.229 clat (msec): min=80, max=10455, avg=8894.00, stdev=2949.34 00:29:21.229 lat (msec): min=2125, max=10460, avg=9114.86, stdev=2648.28 00:29:21.229 clat percentiles (msec): 00:29:21.229 | 1.00th=[ 81], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 6477], 00:29:21.229 | 30.00th=[10402], 40.00th=[10402], 50.00th=[10402], 60.00th=[10402], 00:29:21.229 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:29:21.229 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:29:21.229 | 99.99th=[10402] 00:29:21.229 lat (msec) : 100=2.13%, >=2000=97.87% 00:29:21.229 cpu : usr=0.00%, sys=0.53%, ctx=95, majf=0, minf=12033 00:29:21.229 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:29:21.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.229 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:29:21.229 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.229 job3: (groupid=0, jobs=1): err= 0: pid=1239292: Fri Jul 26 20:50:08 2024 00:29:21.229 read: IOPS=15, BW=15.1MiB/s (15.9MB/s)(158MiB/10451msec) 00:29:21.229 slat (usec): min=649, max=2119.4k, avg=65781.01, stdev=315577.56 00:29:21.229 clat (msec): min=56, max=10337, avg=7193.28, stdev=2069.70 00:29:21.229 lat (msec): min=2095, max=10338, avg=7259.06, stdev=2004.35 00:29:21.229 clat percentiles (msec): 00:29:21.229 | 1.00th=[ 2089], 5.00th=[ 2802], 10.00th=[ 4010], 20.00th=[ 6141], 00:29:21.229 | 30.00th=[ 6409], 40.00th=[ 7819], 50.00th=[ 8221], 60.00th=[ 8221], 00:29:21.229 | 70.00th=[ 8356], 80.00th=[ 8423], 90.00th=[ 8557], 95.00th=[10268], 00:29:21.229 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:29:21.229 | 99.99th=[10402] 00:29:21.229 bw ( KiB/s): min= 2048, max=45056, per=0.39%, avg=15360.00, stdev=20101.03, samples=4 00:29:21.229 iops : min= 2, max= 44, avg=15.00, stdev=19.63, samples=4 00:29:21.229 lat (msec) : 100=0.63%, >=2000=99.37% 00:29:21.229 cpu : usr=0.00%, sys=1.20%, ctx=224, majf=0, minf=32769 00:29:21.229 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=5.1%, 16=10.1%, 32=20.3%, >=64=60.1% 00:29:21.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.229 complete : 0=0.0%, 4=96.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.1% 00:29:21.229 issued rwts: total=158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.229 job3: (groupid=0, jobs=1): err= 0: pid=1239293: Fri Jul 26 20:50:08 2024 00:29:21.229 read: IOPS=2, BW=2877KiB/s (2946kB/s)(29.0MiB/10323msec) 00:29:21.229 slat (usec): min=949, max=2091.5k, avg=353720.78, stdev=760482.78 00:29:21.230 clat (msec): min=64, max=10305, avg=5856.90, stdev=2967.49 00:29:21.230 lat (msec): min=2106, max=10322, avg=6210.62, stdev=2861.85 00:29:21.230 clat percentiles (msec): 00:29:21.230 | 1.00th=[ 65], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 2165], 00:29:21.230 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477], 00:29:21.230 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[10268], 95.00th=[10268], 00:29:21.230 | 
99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:29:21.230 | 99.99th=[10268] 00:29:21.230 lat (msec) : 100=3.45%, >=2000=96.55% 00:29:21.230 cpu : usr=0.00%, sys=0.20%, ctx=69, majf=0, minf=7425 00:29:21.230 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:29:21.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.230 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:29:21.230 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.230 job4: (groupid=0, jobs=1): err= 0: pid=1239294: Fri Jul 26 20:50:08 2024 00:29:21.230 read: IOPS=62, BW=62.4MiB/s (65.5MB/s)(646MiB/10345msec) 00:29:21.230 slat (usec): min=42, max=2096.7k, avg=15948.13, stdev=116274.41 00:29:21.230 clat (msec): min=38, max=4701, avg=1603.39, stdev=1387.56 00:29:21.230 lat (msec): min=601, max=4704, avg=1619.34, stdev=1390.70 00:29:21.230 clat percentiles (msec): 00:29:21.230 | 1.00th=[ 600], 5.00th=[ 634], 10.00th=[ 634], 20.00th=[ 634], 00:29:21.230 | 30.00th=[ 642], 40.00th=[ 693], 50.00th=[ 953], 60.00th=[ 1099], 00:29:21.230 | 70.00th=[ 1452], 80.00th=[ 3138], 90.00th=[ 4396], 95.00th=[ 4530], 00:29:21.230 | 99.00th=[ 4665], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732], 00:29:21.230 | 99.99th=[ 4732] 00:29:21.230 bw ( KiB/s): min=18432, max=217088, per=2.96%, avg=117873.78, stdev=79570.21, samples=9 00:29:21.230 iops : min= 18, max= 212, avg=115.11, stdev=77.71, samples=9 00:29:21.230 lat (msec) : 50=0.15%, 750=44.43%, 1000=7.89%, 2000=23.68%, >=2000=23.84% 00:29:21.230 cpu : usr=0.01%, sys=1.33%, ctx=874, majf=0, minf=32769 00:29:21.230 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:29:21.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.230 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:29:21.230 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.230 job4: (groupid=0, jobs=1): err= 0: pid=1239295: Fri Jul 26 20:50:08 2024 00:29:21.230 read: IOPS=11, BW=11.5MiB/s (12.1MB/s)(120MiB/10433msec) 00:29:21.230 slat (usec): min=597, max=2104.6k, avg=86343.86, stdev=352182.05 00:29:21.230 clat (msec): min=70, max=10422, avg=7755.45, stdev=1766.06 00:29:21.230 lat (msec): min=2118, max=10432, avg=7841.80, stdev=1635.67 00:29:21.230 clat percentiles (msec): 00:29:21.230 | 1.00th=[ 2123], 5.00th=[ 4279], 10.00th=[ 6544], 20.00th=[ 6879], 00:29:21.230 | 30.00th=[ 7148], 40.00th=[ 7416], 50.00th=[ 7752], 60.00th=[ 7953], 00:29:21.230 | 70.00th=[ 8154], 80.00th=[ 8490], 90.00th=[10268], 95.00th=[10402], 00:29:21.230 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:29:21.230 | 99.99th=[10402] 00:29:21.230 lat (msec) : 100=0.83%, >=2000=99.17% 00:29:21.230 cpu : usr=0.01%, sys=0.75%, ctx=426, majf=0, minf=30721 00:29:21.230 IO depths : 1=0.8%, 2=1.7%, 4=3.3%, 8=6.7%, 16=13.3%, 32=26.7%, >=64=47.5% 00:29:21.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.230 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:29:21.230 issued rwts: total=120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.230 job4: (groupid=0, jobs=1): err= 0: pid=1239296: Fri Jul 26 20:50:08 2024 00:29:21.230 read: IOPS=10, BW=10.3MiB/s 
(10.8MB/s)(106MiB/10327msec) 00:29:21.230 slat (usec): min=620, max=2118.9k, avg=96752.92, stdev=342870.80 00:29:21.230 clat (msec): min=70, max=10291, avg=7002.18, stdev=3030.27 00:29:21.230 lat (msec): min=1555, max=10326, avg=7098.93, stdev=2969.98 00:29:21.230 clat percentiles (msec): 00:29:21.230 | 1.00th=[ 1552], 5.00th=[ 1636], 10.00th=[ 1804], 20.00th=[ 2123], 00:29:21.230 | 30.00th=[ 6879], 40.00th=[ 7349], 50.00th=[ 7684], 60.00th=[ 8154], 00:29:21.230 | 70.00th=[ 8658], 80.00th=[10134], 90.00th=[10134], 95.00th=[10268], 00:29:21.230 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:29:21.230 | 99.99th=[10268] 00:29:21.230 lat (msec) : 100=0.94%, 2000=14.15%, >=2000=84.91% 00:29:21.230 cpu : usr=0.00%, sys=0.68%, ctx=557, majf=0, minf=27137 00:29:21.230 IO depths : 1=0.9%, 2=1.9%, 4=3.8%, 8=7.5%, 16=15.1%, 32=30.2%, >=64=40.6% 00:29:21.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.230 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:29:21.230 issued rwts: total=106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.230 job4: (groupid=0, jobs=1): err= 0: pid=1239297: Fri Jul 26 20:50:08 2024 00:29:21.230 read: IOPS=48, BW=48.8MiB/s (51.2MB/s)(503MiB/10303msec) 00:29:21.230 slat (usec): min=44, max=2155.3k, avg=20475.84, stdev=146658.44 00:29:21.230 clat (usec): min=1484, max=6378.3k, avg=2461688.85, stdev=1766852.84 00:29:21.230 lat (msec): min=884, max=6447, avg=2482.16, stdev=1772.11 00:29:21.230 clat percentiles (msec): 00:29:21.230 | 1.00th=[ 927], 5.00th=[ 1020], 10.00th=[ 1045], 20.00th=[ 1133], 00:29:21.230 | 30.00th=[ 1167], 40.00th=[ 1569], 50.00th=[ 1687], 60.00th=[ 1787], 00:29:21.230 | 70.00th=[ 1972], 80.00th=[ 5403], 90.00th=[ 5470], 95.00th=[ 5537], 00:29:21.230 | 99.00th=[ 5537], 99.50th=[ 5537], 99.90th=[ 6409], 99.95th=[ 6409], 00:29:21.230 | 99.99th=[ 6409] 00:29:21.230 bw ( KiB/s): min= 8192, max=159744, per=2.14%, avg=85333.33, stdev=44458.58, samples=9 00:29:21.230 iops : min= 8, max= 156, avg=83.33, stdev=43.42, samples=9 00:29:21.230 lat (msec) : 2=0.20%, 1000=3.78%, 2000=67.00%, >=2000=29.03% 00:29:21.230 cpu : usr=0.02%, sys=1.28%, ctx=607, majf=0, minf=32769 00:29:21.230 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.5% 00:29:21.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.230 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:29:21.230 issued rwts: total=503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.230 job4: (groupid=0, jobs=1): err= 0: pid=1239298: Fri Jul 26 20:50:08 2024 00:29:21.230 read: IOPS=82, BW=82.2MiB/s (86.2MB/s)(849MiB/10329msec) 00:29:21.230 slat (usec): min=36, max=2080.6k, avg=12071.40, stdev=100003.74 00:29:21.230 clat (msec): min=73, max=4769, avg=1393.68, stdev=1340.69 00:29:21.230 lat (msec): min=514, max=4770, avg=1405.75, stdev=1343.73 00:29:21.230 clat percentiles (msec): 00:29:21.230 | 1.00th=[ 514], 5.00th=[ 518], 10.00th=[ 523], 20.00th=[ 523], 00:29:21.230 | 30.00th=[ 531], 40.00th=[ 535], 50.00th=[ 625], 60.00th=[ 1045], 00:29:21.230 | 70.00th=[ 1670], 80.00th=[ 1804], 90.00th=[ 4463], 95.00th=[ 4597], 00:29:21.230 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4799], 99.95th=[ 4799], 00:29:21.230 | 99.99th=[ 4799] 00:29:21.230 bw ( KiB/s): min= 2048, max=253952, per=3.37%, avg=134221.09, stdev=102491.32, 
samples=11 00:29:21.230 iops : min= 2, max= 248, avg=131.00, stdev=100.13, samples=11 00:29:21.230 lat (msec) : 100=0.12%, 750=56.42%, 1000=3.42%, 2000=23.44%, >=2000=16.61% 00:29:21.230 cpu : usr=0.08%, sys=1.66%, ctx=1076, majf=0, minf=32769 00:29:21.230 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:29:21.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.230 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.230 issued rwts: total=849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.230 job4: (groupid=0, jobs=1): err= 0: pid=1239299: Fri Jul 26 20:50:08 2024 00:29:21.230 read: IOPS=98, BW=98.4MiB/s (103MB/s)(1026MiB/10425msec) 00:29:21.230 slat (usec): min=44, max=2121.6k, avg=10087.31, stdev=92159.08 00:29:21.230 clat (msec): min=70, max=6500, avg=1251.39, stdev=1412.78 00:29:21.230 lat (msec): min=358, max=6524, avg=1261.47, stdev=1416.77 00:29:21.230 clat percentiles (msec): 00:29:21.230 | 1.00th=[ 359], 5.00th=[ 368], 10.00th=[ 368], 20.00th=[ 368], 00:29:21.230 | 30.00th=[ 376], 40.00th=[ 393], 50.00th=[ 443], 60.00th=[ 518], 00:29:21.230 | 70.00th=[ 1167], 80.00th=[ 2366], 90.00th=[ 4396], 95.00th=[ 4530], 00:29:21.230 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 6477], 99.95th=[ 6477], 00:29:21.230 | 99.99th=[ 6477] 00:29:21.230 bw ( KiB/s): min= 6144, max=350208, per=3.55%, avg=141469.54, stdev=130830.34, samples=13 00:29:21.230 iops : min= 6, max= 342, avg=138.15, stdev=127.76, samples=13 00:29:21.230 lat (msec) : 100=0.10%, 500=59.06%, 750=6.53%, 1000=2.53%, 2000=8.09% 00:29:21.230 lat (msec) : >=2000=23.68% 00:29:21.230 cpu : usr=0.05%, sys=1.34%, ctx=1408, majf=0, minf=32769 00:29:21.230 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:29:21.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.230 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.230 issued rwts: total=1026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.230 job4: (groupid=0, jobs=1): err= 0: pid=1239300: Fri Jul 26 20:50:08 2024 00:29:21.230 read: IOPS=66, BW=66.3MiB/s (69.6MB/s)(666MiB/10041msec) 00:29:21.230 slat (usec): min=36, max=2132.8k, avg=15040.94, stdev=102752.78 00:29:21.230 clat (msec): min=20, max=6054, avg=1523.75, stdev=1379.89 00:29:21.230 lat (msec): min=41, max=6055, avg=1538.79, stdev=1396.22 00:29:21.230 clat percentiles (msec): 00:29:21.230 | 1.00th=[ 77], 5.00th=[ 300], 10.00th=[ 634], 20.00th=[ 743], 00:29:21.230 | 30.00th=[ 751], 40.00th=[ 760], 50.00th=[ 810], 60.00th=[ 1116], 00:29:21.230 | 70.00th=[ 1351], 80.00th=[ 3004], 90.00th=[ 3842], 95.00th=[ 4329], 00:29:21.230 | 99.00th=[ 6007], 99.50th=[ 6007], 99.90th=[ 6074], 99.95th=[ 6074], 00:29:21.230 | 99.99th=[ 6074] 00:29:21.231 bw ( KiB/s): min=28672, max=176128, per=3.09%, avg=123136.00, stdev=53376.55, samples=8 00:29:21.231 iops : min= 28, max= 172, avg=120.25, stdev=52.13, samples=8 00:29:21.231 lat (msec) : 50=0.75%, 100=1.35%, 250=2.25%, 500=3.60%, 750=24.47% 00:29:21.231 lat (msec) : 1000=24.32%, 2000=21.92%, >=2000=21.32% 00:29:21.231 cpu : usr=0.00%, sys=1.27%, ctx=1092, majf=0, minf=32769 00:29:21.231 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:29:21.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.231 complete : 0=0.0%, 4=99.8%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:29:21.231 issued rwts: total=666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.231 job4: (groupid=0, jobs=1): err= 0: pid=1239301: Fri Jul 26 20:50:08 2024 00:29:21.231 read: IOPS=32, BW=32.3MiB/s (33.8MB/s)(333MiB/10320msec) 00:29:21.231 slat (usec): min=51, max=2104.7k, avg=30774.40, stdev=196499.75 00:29:21.231 clat (msec): min=70, max=8828, avg=3781.44, stdev=2968.38 00:29:21.231 lat (msec): min=767, max=8841, avg=3812.21, stdev=2971.31 00:29:21.231 clat percentiles (msec): 00:29:21.231 | 1.00th=[ 768], 5.00th=[ 802], 10.00th=[ 827], 20.00th=[ 902], 00:29:21.231 | 30.00th=[ 1167], 40.00th=[ 2089], 50.00th=[ 2366], 60.00th=[ 2467], 00:29:21.231 | 70.00th=[ 6745], 80.00th=[ 7215], 90.00th=[ 8087], 95.00th=[ 8658], 00:29:21.231 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:29:21.231 | 99.99th=[ 8792] 00:29:21.231 bw ( KiB/s): min= 4096, max=110813, per=1.17%, avg=46673.44, stdev=33041.29, samples=9 00:29:21.231 iops : min= 4, max= 108, avg=45.56, stdev=32.21, samples=9 00:29:21.231 lat (msec) : 100=0.30%, 1000=25.83%, 2000=12.91%, >=2000=60.96% 00:29:21.231 cpu : usr=0.00%, sys=1.30%, ctx=612, majf=0, minf=32769 00:29:21.231 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.1% 00:29:21.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.231 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:29:21.231 issued rwts: total=333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.231 job4: (groupid=0, jobs=1): err= 0: pid=1239302: Fri Jul 26 20:50:08 2024 00:29:21.231 read: IOPS=38, BW=38.1MiB/s (39.9MB/s)(397MiB/10421msec) 00:29:21.231 slat (usec): min=427, max=2021.9k, avg=26129.48, stdev=127889.21 00:29:21.231 clat (msec): min=44, max=4231, avg=2648.43, stdev=604.60 00:29:21.231 lat (msec): min=1442, max=5151, avg=2674.56, stdev=600.49 00:29:21.231 clat percentiles (msec): 00:29:21.231 | 1.00th=[ 1452], 5.00th=[ 1569], 10.00th=[ 1720], 20.00th=[ 2165], 00:29:21.231 | 30.00th=[ 2400], 40.00th=[ 2601], 50.00th=[ 2668], 60.00th=[ 2735], 00:29:21.231 | 70.00th=[ 2836], 80.00th=[ 3004], 90.00th=[ 3507], 95.00th=[ 3809], 00:29:21.231 | 99.00th=[ 3943], 99.50th=[ 3943], 99.90th=[ 4245], 99.95th=[ 4245], 00:29:21.231 | 99.99th=[ 4245] 00:29:21.231 bw ( KiB/s): min=32833, max=96256, per=1.54%, avg=61254.22, stdev=20384.45, samples=9 00:29:21.231 iops : min= 32, max= 94, avg=59.78, stdev=19.89, samples=9 00:29:21.231 lat (msec) : 50=0.25%, 2000=11.84%, >=2000=87.91% 00:29:21.231 cpu : usr=0.00%, sys=1.07%, ctx=1082, majf=0, minf=32769 00:29:21.231 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.1%, >=64=84.1% 00:29:21.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.231 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:29:21.231 issued rwts: total=397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.231 job4: (groupid=0, jobs=1): err= 0: pid=1239303: Fri Jul 26 20:50:08 2024 00:29:21.231 read: IOPS=48, BW=48.1MiB/s (50.4MB/s)(498MiB/10354msec) 00:29:21.231 slat (usec): min=47, max=2125.9k, avg=20082.64, stdev=124303.86 00:29:21.231 clat (msec): min=349, max=6972, avg=2471.13, stdev=2242.95 00:29:21.231 lat (msec): min=353, max=6981, avg=2491.21, stdev=2251.87 00:29:21.231 clat 
percentiles (msec): 00:29:21.231 | 1.00th=[ 384], 5.00th=[ 575], 10.00th=[ 659], 20.00th=[ 793], 00:29:21.231 | 30.00th=[ 969], 40.00th=[ 1150], 50.00th=[ 1200], 60.00th=[ 1636], 00:29:21.231 | 70.00th=[ 2400], 80.00th=[ 5604], 90.00th=[ 6544], 95.00th=[ 6812], 00:29:21.231 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:29:21.231 | 99.99th=[ 6946] 00:29:21.231 bw ( KiB/s): min= 4096, max=204800, per=1.59%, avg=63232.83, stdev=54115.43, samples=12 00:29:21.231 iops : min= 4, max= 200, avg=61.75, stdev=52.85, samples=12 00:29:21.231 lat (msec) : 500=3.41%, 750=14.06%, 1000=14.26%, 2000=31.73%, >=2000=36.55% 00:29:21.231 cpu : usr=0.00%, sys=1.09%, ctx=1079, majf=0, minf=32769 00:29:21.231 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.3% 00:29:21.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.231 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:29:21.231 issued rwts: total=498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.231 job4: (groupid=0, jobs=1): err= 0: pid=1239304: Fri Jul 26 20:50:08 2024 00:29:21.231 read: IOPS=108, BW=109MiB/s (114MB/s)(1124MiB/10333msec) 00:29:21.231 slat (usec): min=37, max=2025.4k, avg=9108.63, stdev=85650.79 00:29:21.231 clat (msec): min=86, max=2565, avg=1017.76, stdev=859.89 00:29:21.231 lat (msec): min=362, max=2567, avg=1026.87, stdev=861.91 00:29:21.231 clat percentiles (msec): 00:29:21.231 | 1.00th=[ 368], 5.00th=[ 368], 10.00th=[ 372], 20.00th=[ 372], 00:29:21.231 | 30.00th=[ 372], 40.00th=[ 380], 50.00th=[ 384], 60.00th=[ 885], 00:29:21.231 | 70.00th=[ 1083], 80.00th=[ 2265], 90.00th=[ 2534], 95.00th=[ 2534], 00:29:21.231 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2567], 99.95th=[ 2567], 00:29:21.231 | 99.99th=[ 2567] 00:29:21.231 bw ( KiB/s): min=14336, max=350208, per=5.12%, avg=203980.80, stdev=133202.94, samples=10 00:29:21.231 iops : min= 14, max= 342, avg=199.20, stdev=130.08, samples=10 00:29:21.231 lat (msec) : 100=0.09%, 500=54.89%, 750=1.96%, 1000=9.70%, 2000=8.90% 00:29:21.231 lat (msec) : >=2000=24.47% 00:29:21.231 cpu : usr=0.11%, sys=2.12%, ctx=1302, majf=0, minf=32769 00:29:21.231 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:29:21.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.231 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.231 issued rwts: total=1124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.231 job4: (groupid=0, jobs=1): err= 0: pid=1239305: Fri Jul 26 20:50:08 2024 00:29:21.231 read: IOPS=43, BW=43.3MiB/s (45.4MB/s)(451MiB/10424msec) 00:29:21.231 slat (usec): min=492, max=2022.3k, avg=22949.36, stdev=121810.06 00:29:21.231 clat (msec): min=70, max=4283, avg=2611.51, stdev=949.31 00:29:21.231 lat (msec): min=1299, max=6305, avg=2634.46, stdev=955.92 00:29:21.231 clat percentiles (msec): 00:29:21.231 | 1.00th=[ 1301], 5.00th=[ 1351], 10.00th=[ 1385], 20.00th=[ 1502], 00:29:21.231 | 30.00th=[ 2072], 40.00th=[ 2232], 50.00th=[ 2400], 60.00th=[ 2635], 00:29:21.231 | 70.00th=[ 3071], 80.00th=[ 3910], 90.00th=[ 3977], 95.00th=[ 4077], 00:29:21.231 | 99.00th=[ 4111], 99.50th=[ 4178], 99.90th=[ 4279], 99.95th=[ 4279], 00:29:21.231 | 99.99th=[ 4279] 00:29:21.231 bw ( KiB/s): min=14336, max=110592, per=1.51%, avg=60136.73, stdev=31678.17, samples=11 00:29:21.231 iops : min= 14, 
max= 108, avg=58.73, stdev=30.94, samples=11 00:29:21.231 lat (msec) : 100=0.22%, 2000=25.06%, >=2000=74.72% 00:29:21.231 cpu : usr=0.00%, sys=1.49%, ctx=1127, majf=0, minf=32769 00:29:21.231 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.0% 00:29:21.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.231 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:29:21.231 issued rwts: total=451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.231 job4: (groupid=0, jobs=1): err= 0: pid=1239306: Fri Jul 26 20:50:08 2024 00:29:21.231 read: IOPS=22, BW=22.6MiB/s (23.7MB/s)(237MiB/10477msec) 00:29:21.231 slat (usec): min=959, max=2094.9k, avg=43905.33, stdev=222042.76 00:29:21.231 clat (msec): min=69, max=7368, avg=4769.12, stdev=1673.08 00:29:21.231 lat (msec): min=2102, max=7427, avg=4813.03, stdev=1641.71 00:29:21.231 clat percentiles (msec): 00:29:21.231 | 1.00th=[ 2106], 5.00th=[ 2333], 10.00th=[ 2366], 20.00th=[ 2635], 00:29:21.231 | 30.00th=[ 3742], 40.00th=[ 4178], 50.00th=[ 5336], 60.00th=[ 5604], 00:29:21.231 | 70.00th=[ 5940], 80.00th=[ 6342], 90.00th=[ 6946], 95.00th=[ 7148], 00:29:21.231 | 99.00th=[ 7349], 99.50th=[ 7349], 99.90th=[ 7349], 99.95th=[ 7349], 00:29:21.231 | 99.99th=[ 7349] 00:29:21.231 bw ( KiB/s): min=10240, max=75776, per=0.93%, avg=37205.33, stdev=27336.52, samples=6 00:29:21.231 iops : min= 10, max= 74, avg=36.33, stdev=26.70, samples=6 00:29:21.231 lat (msec) : 100=0.42%, >=2000=99.58% 00:29:21.231 cpu : usr=0.00%, sys=1.11%, ctx=686, majf=0, minf=32769 00:29:21.231 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.5%, >=64=73.4% 00:29:21.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.231 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:29:21.231 issued rwts: total=237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.231 job5: (groupid=0, jobs=1): err= 0: pid=1239307: Fri Jul 26 20:50:08 2024 00:29:21.231 read: IOPS=62, BW=62.8MiB/s (65.8MB/s)(653MiB/10403msec) 00:29:21.231 slat (usec): min=60, max=2066.3k, avg=15785.12, stdev=149057.48 00:29:21.232 clat (msec): min=92, max=8442, avg=1378.60, stdev=1700.40 00:29:21.232 lat (msec): min=279, max=8474, avg=1394.39, stdev=1717.74 00:29:21.232 clat percentiles (msec): 00:29:21.232 | 1.00th=[ 279], 5.00th=[ 284], 10.00th=[ 288], 20.00th=[ 292], 00:29:21.232 | 30.00th=[ 326], 40.00th=[ 477], 50.00th=[ 510], 60.00th=[ 542], 00:29:21.232 | 70.00th=[ 584], 80.00th=[ 4245], 90.00th=[ 4530], 95.00th=[ 4597], 00:29:21.232 | 99.00th=[ 4665], 99.50th=[ 6409], 99.90th=[ 8423], 99.95th=[ 8423], 00:29:21.232 | 99.99th=[ 8423] 00:29:21.232 bw ( KiB/s): min=16384, max=434176, per=4.50%, avg=179200.00, stdev=166515.12, samples=6 00:29:21.232 iops : min= 16, max= 424, avg=175.00, stdev=162.61, samples=6 00:29:21.232 lat (msec) : 100=0.15%, 500=45.48%, 750=28.64%, >=2000=25.73% 00:29:21.232 cpu : usr=0.02%, sys=1.09%, ctx=1131, majf=0, minf=32769 00:29:21.232 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.4% 00:29:21.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.232 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:29:21.232 issued rwts: total=653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.232 
job5: (groupid=0, jobs=1): err= 0: pid=1239308: Fri Jul 26 20:50:08 2024 00:29:21.232 read: IOPS=65, BW=65.2MiB/s (68.4MB/s)(657MiB/10076msec) 00:29:21.232 slat (usec): min=76, max=2072.0k, avg=15253.29, stdev=106188.58 00:29:21.232 clat (msec): min=50, max=3868, avg=1582.31, stdev=1154.43 00:29:21.232 lat (msec): min=90, max=3877, avg=1597.56, stdev=1156.45 00:29:21.232 clat percentiles (msec): 00:29:21.232 | 1.00th=[ 192], 5.00th=[ 485], 10.00th=[ 489], 20.00th=[ 510], 00:29:21.232 | 30.00th=[ 550], 40.00th=[ 1053], 50.00th=[ 1099], 60.00th=[ 1502], 00:29:21.232 | 70.00th=[ 2198], 80.00th=[ 2433], 90.00th=[ 3574], 95.00th=[ 3742], 00:29:21.232 | 99.00th=[ 3842], 99.50th=[ 3876], 99.90th=[ 3876], 99.95th=[ 3876], 00:29:21.232 | 99.99th=[ 3876] 00:29:21.232 bw ( KiB/s): min=28672, max=260096, per=2.26%, avg=90170.17, stdev=63494.47, samples=12 00:29:21.232 iops : min= 28, max= 254, avg=88.00, stdev=62.04, samples=12 00:29:21.232 lat (msec) : 100=0.30%, 250=1.52%, 500=15.37%, 750=18.26%, 1000=0.91% 00:29:21.232 lat (msec) : 2000=29.53%, >=2000=34.09% 00:29:21.232 cpu : usr=0.07%, sys=1.15%, ctx=1771, majf=0, minf=32769 00:29:21.232 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4% 00:29:21.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.232 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:29:21.232 issued rwts: total=657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.232 job5: (groupid=0, jobs=1): err= 0: pid=1239309: Fri Jul 26 20:50:08 2024 00:29:21.232 read: IOPS=31, BW=31.5MiB/s (33.0MB/s)(316MiB/10041msec) 00:29:21.232 slat (usec): min=610, max=2112.1k, avg=31672.01, stdev=191681.85 00:29:21.232 clat (msec): min=30, max=8253, avg=2391.49, stdev=2465.24 00:29:21.232 lat (msec): min=43, max=8264, avg=2423.16, stdev=2483.58 00:29:21.232 clat percentiles (msec): 00:29:21.232 | 1.00th=[ 72], 5.00th=[ 211], 10.00th=[ 439], 20.00th=[ 844], 00:29:21.232 | 30.00th=[ 1234], 40.00th=[ 1435], 50.00th=[ 1519], 60.00th=[ 1620], 00:29:21.232 | 70.00th=[ 1854], 80.00th=[ 2232], 90.00th=[ 7819], 95.00th=[ 8020], 00:29:21.232 | 99.00th=[ 8154], 99.50th=[ 8221], 99.90th=[ 8221], 99.95th=[ 8221], 00:29:21.232 | 99.99th=[ 8221] 00:29:21.232 bw ( KiB/s): min=55296, max=100352, per=1.94%, avg=77414.40, stdev=18736.68, samples=5 00:29:21.232 iops : min= 54, max= 98, avg=75.60, stdev=18.30, samples=5 00:29:21.232 lat (msec) : 50=0.63%, 100=0.95%, 250=4.43%, 500=6.01%, 750=5.06% 00:29:21.232 lat (msec) : 1000=9.18%, 2000=45.89%, >=2000=27.85% 00:29:21.232 cpu : usr=0.00%, sys=1.21%, ctx=1201, majf=0, minf=32769 00:29:21.232 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.1%, 32=10.1%, >=64=80.1% 00:29:21.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.232 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:29:21.232 issued rwts: total=316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.232 job5: (groupid=0, jobs=1): err= 0: pid=1239310: Fri Jul 26 20:50:08 2024 00:29:21.232 read: IOPS=196, BW=196MiB/s (206MB/s)(1973MiB/10055msec) 00:29:21.232 slat (usec): min=40, max=81254, avg=5063.15, stdev=8770.71 00:29:21.232 clat (msec): min=53, max=916, avg=623.35, stdev=156.61 00:29:21.232 lat (msec): min=66, max=918, avg=628.41, stdev=157.47 00:29:21.232 clat percentiles (msec): 00:29:21.232 | 1.00th=[ 144], 5.00th=[ 368], 10.00th=[ 
388], 20.00th=[ 502], 00:29:21.232 | 30.00th=[ 600], 40.00th=[ 609], 50.00th=[ 625], 60.00th=[ 659], 00:29:21.232 | 70.00th=[ 709], 80.00th=[ 751], 90.00th=[ 835], 95.00th=[ 852], 00:29:21.232 | 99.00th=[ 877], 99.50th=[ 885], 99.90th=[ 902], 99.95th=[ 919], 00:29:21.232 | 99.99th=[ 919] 00:29:21.232 bw ( KiB/s): min=108544, max=323584, per=4.99%, avg=198847.68, stdev=52400.32, samples=19 00:29:21.232 iops : min= 106, max= 316, avg=194.16, stdev=51.17, samples=19 00:29:21.232 lat (msec) : 100=0.61%, 250=1.52%, 500=17.89%, 750=59.60%, 1000=20.38% 00:29:21.232 cpu : usr=0.16%, sys=2.61%, ctx=1776, majf=0, minf=32769 00:29:21.232 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:29:21.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.232 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.232 issued rwts: total=1973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.232 job5: (groupid=0, jobs=1): err= 0: pid=1239311: Fri Jul 26 20:50:08 2024 00:29:21.232 read: IOPS=87, BW=87.9MiB/s (92.2MB/s)(882MiB/10034msec) 00:29:21.232 slat (usec): min=41, max=2114.0k, avg=11333.56, stdev=114905.05 00:29:21.232 clat (msec): min=32, max=7019, avg=647.19, stdev=917.18 00:29:21.232 lat (msec): min=34, max=7033, avg=658.52, stdev=943.03 00:29:21.232 clat percentiles (msec): 00:29:21.232 | 1.00th=[ 65], 5.00th=[ 178], 10.00th=[ 313], 20.00th=[ 384], 00:29:21.232 | 30.00th=[ 388], 40.00th=[ 388], 50.00th=[ 397], 60.00th=[ 414], 00:29:21.232 | 70.00th=[ 531], 80.00th=[ 642], 90.00th=[ 1045], 95.00th=[ 1183], 00:29:21.232 | 99.00th=[ 6946], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:29:21.232 | 99.99th=[ 7013] 00:29:21.232 bw ( KiB/s): min=110592, max=348160, per=6.47%, avg=257706.67, stdev=91885.77, samples=6 00:29:21.232 iops : min= 108, max= 340, avg=251.67, stdev=89.73, samples=6 00:29:21.232 lat (msec) : 50=0.68%, 100=1.81%, 250=5.33%, 500=58.73%, 750=16.33% 00:29:21.232 lat (msec) : 1000=5.67%, 2000=8.39%, >=2000=3.06% 00:29:21.232 cpu : usr=0.07%, sys=1.59%, ctx=947, majf=0, minf=32769 00:29:21.232 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:29:21.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.232 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.232 issued rwts: total=882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.232 job5: (groupid=0, jobs=1): err= 0: pid=1239312: Fri Jul 26 20:50:08 2024 00:29:21.232 read: IOPS=49, BW=49.5MiB/s (51.9MB/s)(496MiB/10015msec) 00:29:21.232 slat (usec): min=455, max=2091.8k, avg=20158.67, stdev=151402.65 00:29:21.232 clat (msec): min=13, max=5664, avg=963.02, stdev=821.97 00:29:21.232 lat (msec): min=14, max=7359, avg=983.17, stdev=872.43 00:29:21.232 clat percentiles (msec): 00:29:21.232 | 1.00th=[ 17], 5.00th=[ 53], 10.00th=[ 161], 20.00th=[ 506], 00:29:21.232 | 30.00th=[ 684], 40.00th=[ 709], 50.00th=[ 726], 60.00th=[ 743], 00:29:21.232 | 70.00th=[ 1150], 80.00th=[ 1552], 90.00th=[ 1737], 95.00th=[ 1838], 00:29:21.232 | 99.00th=[ 5604], 99.50th=[ 5671], 99.90th=[ 5671], 99.95th=[ 5671], 00:29:21.232 | 99.99th=[ 5671] 00:29:21.232 bw ( KiB/s): min=26624, max=184320, per=2.78%, avg=110592.00, stdev=73113.89, samples=5 00:29:21.232 iops : min= 26, max= 180, avg=108.00, stdev=71.40, samples=5 00:29:21.232 lat (msec) : 20=1.61%, 
50=3.23%, 100=2.62%, 250=5.04%, 500=7.46% 00:29:21.232 lat (msec) : 750=41.73%, 1000=6.85%, 2000=28.63%, >=2000=2.82% 00:29:21.232 cpu : usr=0.06%, sys=1.10%, ctx=1350, majf=0, minf=32769 00:29:21.232 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.3% 00:29:21.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.232 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:29:21.232 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.232 job5: (groupid=0, jobs=1): err= 0: pid=1239313: Fri Jul 26 20:50:08 2024 00:29:21.233 read: IOPS=44, BW=44.6MiB/s (46.7MB/s)(447MiB/10033msec) 00:29:21.233 slat (usec): min=433, max=2066.8k, avg=22374.04, stdev=127165.45 00:29:21.233 clat (msec): min=29, max=4690, avg=2202.36, stdev=1334.98 00:29:21.233 lat (msec): min=35, max=4706, avg=2224.73, stdev=1339.84 00:29:21.233 clat percentiles (msec): 00:29:21.233 | 1.00th=[ 53], 5.00th=[ 176], 10.00th=[ 380], 20.00th=[ 995], 00:29:21.233 | 30.00th=[ 1318], 40.00th=[ 1620], 50.00th=[ 2022], 60.00th=[ 2433], 00:29:21.233 | 70.00th=[ 3339], 80.00th=[ 3608], 90.00th=[ 4144], 95.00th=[ 4329], 00:29:21.233 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:29:21.233 | 99.99th=[ 4665] 00:29:21.233 bw ( KiB/s): min= 4096, max=129024, per=1.50%, avg=59578.18, stdev=36480.33, samples=11 00:29:21.233 iops : min= 4, max= 126, avg=58.18, stdev=35.63, samples=11 00:29:21.233 lat (msec) : 50=0.89%, 100=1.57%, 250=4.70%, 500=4.92%, 750=4.03% 00:29:21.233 lat (msec) : 1000=4.03%, 2000=29.75%, >=2000=50.11% 00:29:21.233 cpu : usr=0.00%, sys=1.14%, ctx=1333, majf=0, minf=32769 00:29:21.233 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.9% 00:29:21.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.233 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:29:21.233 issued rwts: total=447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.233 job5: (groupid=0, jobs=1): err= 0: pid=1239314: Fri Jul 26 20:50:08 2024 00:29:21.233 read: IOPS=40, BW=40.6MiB/s (42.6MB/s)(409MiB/10078msec) 00:29:21.233 slat (usec): min=1268, max=2083.5k, avg=24451.72, stdev=155044.34 00:29:21.233 clat (msec): min=74, max=6984, avg=1546.90, stdev=1314.75 00:29:21.233 lat (msec): min=78, max=7169, avg=1571.35, stdev=1342.78 00:29:21.233 clat percentiles (msec): 00:29:21.233 | 1.00th=[ 103], 5.00th=[ 330], 10.00th=[ 451], 20.00th=[ 785], 00:29:21.233 | 30.00th=[ 1062], 40.00th=[ 1183], 50.00th=[ 1301], 60.00th=[ 1385], 00:29:21.233 | 70.00th=[ 1502], 80.00th=[ 1603], 90.00th=[ 3406], 95.00th=[ 5134], 00:29:21.233 | 99.00th=[ 6879], 99.50th=[ 6946], 99.90th=[ 7013], 99.95th=[ 7013], 00:29:21.233 | 99.99th=[ 7013] 00:29:21.233 bw ( KiB/s): min= 4096, max=126976, per=2.07%, avg=82505.14, stdev=40955.12, samples=7 00:29:21.233 iops : min= 4, max= 124, avg=80.57, stdev=40.00, samples=7 00:29:21.233 lat (msec) : 100=0.98%, 250=3.18%, 500=7.82%, 750=6.60%, 1000=7.82% 00:29:21.233 lat (msec) : 2000=63.57%, >=2000=10.02% 00:29:21.233 cpu : usr=0.03%, sys=0.93%, ctx=1543, majf=0, minf=32769 00:29:21.233 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=3.9%, 32=7.8%, >=64=84.6% 00:29:21.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.233 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 
00:29:21.233 issued rwts: total=409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.233 job5: (groupid=0, jobs=1): err= 0: pid=1239315: Fri Jul 26 20:50:08 2024 00:29:21.233 read: IOPS=109, BW=110MiB/s (115MB/s)(1108MiB/10078msec) 00:29:21.233 slat (usec): min=43, max=121570, avg=9044.33, stdev=17797.67 00:29:21.233 clat (msec): min=50, max=4038, avg=1096.15, stdev=921.74 00:29:21.233 lat (msec): min=93, max=4039, avg=1105.19, stdev=925.05 00:29:21.233 clat percentiles (msec): 00:29:21.233 | 1.00th=[ 363], 5.00th=[ 372], 10.00th=[ 384], 20.00th=[ 451], 00:29:21.233 | 30.00th=[ 567], 40.00th=[ 609], 50.00th=[ 642], 60.00th=[ 718], 00:29:21.233 | 70.00th=[ 1183], 80.00th=[ 1770], 90.00th=[ 2500], 95.00th=[ 3306], 00:29:21.233 | 99.00th=[ 4010], 99.50th=[ 4010], 99.90th=[ 4044], 99.95th=[ 4044], 00:29:21.233 | 99.99th=[ 4044] 00:29:21.233 bw ( KiB/s): min=20480, max=315392, per=2.65%, avg=105633.68, stdev=100668.03, samples=19 00:29:21.233 iops : min= 20, max= 308, avg=103.16, stdev=98.31, samples=19 00:29:21.233 lat (msec) : 100=0.18%, 250=0.27%, 500=24.46%, 750=36.28%, 1000=5.42% 00:29:21.233 lat (msec) : 2000=17.87%, >=2000=15.52% 00:29:21.233 cpu : usr=0.09%, sys=1.67%, ctx=1876, majf=0, minf=32769 00:29:21.233 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:29:21.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.233 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.233 issued rwts: total=1108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.233 job5: (groupid=0, jobs=1): err= 0: pid=1239316: Fri Jul 26 20:50:08 2024 00:29:21.233 read: IOPS=40, BW=40.7MiB/s (42.7MB/s)(410MiB/10062msec) 00:29:21.233 slat (usec): min=55, max=2112.0k, avg=24432.75, stdev=166783.49 00:29:21.233 clat (msec): min=41, max=6637, avg=1407.55, stdev=1212.94 00:29:21.233 lat (msec): min=69, max=6656, avg=1431.98, stdev=1238.31 00:29:21.233 clat percentiles (msec): 00:29:21.233 | 1.00th=[ 184], 5.00th=[ 642], 10.00th=[ 642], 20.00th=[ 651], 00:29:21.233 | 30.00th=[ 651], 40.00th=[ 693], 50.00th=[ 743], 60.00th=[ 1217], 00:29:21.233 | 70.00th=[ 1754], 80.00th=[ 2265], 90.00th=[ 2534], 95.00th=[ 2769], 00:29:21.233 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611], 00:29:21.233 | 99.99th=[ 6611] 00:29:21.233 bw ( KiB/s): min=22528, max=190464, per=2.07%, avg=82505.14, stdev=67931.83, samples=7 00:29:21.233 iops : min= 22, max= 186, avg=80.57, stdev=66.34, samples=7 00:29:21.233 lat (msec) : 50=0.24%, 100=0.49%, 250=0.73%, 500=1.22%, 750=47.32% 00:29:21.233 lat (msec) : 1000=4.88%, 2000=20.24%, >=2000=24.88% 00:29:21.233 cpu : usr=0.05%, sys=1.38%, ctx=743, majf=0, minf=32769 00:29:21.233 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=3.9%, 32=7.8%, >=64=84.6% 00:29:21.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.233 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:29:21.233 issued rwts: total=410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.233 job5: (groupid=0, jobs=1): err= 0: pid=1239317: Fri Jul 26 20:50:08 2024 00:29:21.233 read: IOPS=23, BW=23.3MiB/s (24.4MB/s)(234MiB/10049msec) 00:29:21.233 slat (usec): min=652, max=2099.4k, avg=42738.77, stdev=217712.54 00:29:21.233 clat (msec): min=46, max=7956, avg=2282.67, stdev=1776.60 
00:29:21.233 lat (msec): min=50, max=7970, avg=2325.41, stdev=1813.01 00:29:21.233 clat percentiles (msec): 00:29:21.233 | 1.00th=[ 53], 5.00th=[ 218], 10.00th=[ 456], 20.00th=[ 953], 00:29:21.233 | 30.00th=[ 1452], 40.00th=[ 1938], 50.00th=[ 2333], 60.00th=[ 2467], 00:29:21.233 | 70.00th=[ 2500], 80.00th=[ 2534], 90.00th=[ 2567], 95.00th=[ 7819], 00:29:21.233 | 99.00th=[ 7953], 99.50th=[ 7953], 99.90th=[ 7953], 99.95th=[ 7953], 00:29:21.233 | 99.99th=[ 7953] 00:29:21.233 bw ( KiB/s): min=22528, max=57344, per=1.10%, avg=43827.20, stdev=13553.99, samples=5 00:29:21.233 iops : min= 22, max= 56, avg=42.80, stdev=13.24, samples=5 00:29:21.233 lat (msec) : 50=0.43%, 100=1.71%, 250=3.42%, 500=5.13%, 750=5.98% 00:29:21.233 lat (msec) : 1000=4.27%, 2000=20.09%, >=2000=58.97% 00:29:21.233 cpu : usr=0.02%, sys=0.91%, ctx=827, majf=0, minf=32769 00:29:21.233 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.7%, >=64=73.1% 00:29:21.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.233 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:29:21.233 issued rwts: total=234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.233 job5: (groupid=0, jobs=1): err= 0: pid=1239318: Fri Jul 26 20:50:08 2024 00:29:21.233 read: IOPS=106, BW=106MiB/s (112MB/s)(1116MiB/10491msec) 00:29:21.233 slat (usec): min=42, max=2085.0k, avg=9312.30, stdev=99616.32 00:29:21.233 clat (msec): min=93, max=5277, avg=1046.57, stdev=1433.92 00:29:21.233 lat (msec): min=245, max=5281, avg=1055.88, stdev=1438.68 00:29:21.233 clat percentiles (msec): 00:29:21.233 | 1.00th=[ 247], 5.00th=[ 255], 10.00th=[ 279], 20.00th=[ 359], 00:29:21.233 | 30.00th=[ 426], 40.00th=[ 464], 50.00th=[ 489], 60.00th=[ 506], 00:29:21.233 | 70.00th=[ 514], 80.00th=[ 651], 90.00th=[ 4463], 95.00th=[ 5067], 00:29:21.233 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:29:21.233 | 99.99th=[ 5269] 00:29:21.233 bw ( KiB/s): min=14336, max=395264, per=5.08%, avg=202342.40, stdev=139310.17, samples=10 00:29:21.233 iops : min= 14, max= 386, avg=197.60, stdev=136.05, samples=10 00:29:21.233 lat (msec) : 100=0.09%, 250=3.94%, 500=53.94%, 750=22.94%, 1000=2.33% 00:29:21.233 lat (msec) : 2000=0.09%, >=2000=16.67% 00:29:21.233 cpu : usr=0.06%, sys=1.73%, ctx=2008, majf=0, minf=32769 00:29:21.233 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4% 00:29:21.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.233 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.233 issued rwts: total=1116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.233 job5: (groupid=0, jobs=1): err= 0: pid=1239319: Fri Jul 26 20:50:08 2024 00:29:21.233 read: IOPS=44, BW=45.0MiB/s (47.1MB/s)(451MiB/10030msec) 00:29:21.233 slat (usec): min=536, max=2076.5k, avg=22192.21, stdev=156881.80 00:29:21.233 clat (msec): min=18, max=7127, avg=1307.28, stdev=1321.53 00:29:21.234 lat (msec): min=31, max=7148, avg=1329.47, stdev=1348.97 00:29:21.234 clat percentiles (msec): 00:29:21.234 | 1.00th=[ 47], 5.00th=[ 148], 10.00th=[ 393], 20.00th=[ 751], 00:29:21.234 | 30.00th=[ 978], 40.00th=[ 1020], 50.00th=[ 1070], 60.00th=[ 1200], 00:29:21.234 | 70.00th=[ 1267], 80.00th=[ 1351], 90.00th=[ 1418], 95.00th=[ 5403], 00:29:21.234 | 99.00th=[ 7080], 99.50th=[ 7148], 99.90th=[ 7148], 99.95th=[ 7148], 00:29:21.234 | 
99.99th=[ 7148] 00:29:21.234 bw ( KiB/s): min=79872, max=131072, per=2.78%, avg=110592.00, stdev=20274.17, samples=6 00:29:21.234 iops : min= 78, max= 128, avg=108.00, stdev=19.80, samples=6 00:29:21.234 lat (msec) : 20=0.22%, 50=0.89%, 100=2.66%, 250=3.33%, 500=5.54% 00:29:21.234 lat (msec) : 750=7.54%, 1000=13.30%, 2000=60.31%, >=2000=6.21% 00:29:21.234 cpu : usr=0.00%, sys=1.17%, ctx=1641, majf=0, minf=32769 00:29:21.234 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.0% 00:29:21.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.234 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:29:21.234 issued rwts: total=451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.234 00:29:21.234 Run status group 0 (all jobs): 00:29:21.234 READ: bw=3889MiB/s (4078MB/s), 2177KiB/s-219MiB/s (2229kB/s-230MB/s), io=39.9GiB (42.8GB), run=10013-10499msec 00:29:21.234 00:29:21.234 Disk stats (read/write): 00:29:21.234 nvme0n1: ios=78897/0, merge=0/0, ticks=6108549/0, in_queue=6108549, util=97.91% 00:29:21.234 nvme1n1: ios=41034/0, merge=0/0, ticks=6058509/0, in_queue=6058509, util=98.42% 00:29:21.234 nvme2n1: ios=48671/0, merge=0/0, ticks=5706902/0, in_queue=5706902, util=98.60% 00:29:21.234 nvme3n1: ios=27767/0, merge=0/0, ticks=5251140/0, in_queue=5251140, util=98.68% 00:29:21.234 nvme4n1: ios=55232/0, merge=0/0, ticks=5630928/0, in_queue=5630928, util=98.91% 00:29:21.234 nvme5n1: ios=73082/0, merge=0/0, ticks=6936195/0, in_queue=6936195, util=99.11% 00:29:21.234 20:50:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:29:21.234 20:50:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:29:21.234 20:50:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:29:21.234 20:50:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:29:21.234 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:29:21.234 20:50:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:29:21.234 20:50:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:29:21.234 20:50:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:21.234 20:50:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:29:21.234 20:50:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:21.234 20:50:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:29:21.234 20:50:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:29:21.234 20:50:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:21.234 20:50:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.234 20:50:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:21.234 20:50:09 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.234 20:50:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:29:21.234 20:50:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:22.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:22.613 20:50:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:29:22.613 20:50:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:29:22.613 20:50:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:22.613 20:50:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:29:22.613 20:50:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:22.613 20:50:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:29:22.613 20:50:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:29:22.613 20:50:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:22.613 20:50:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.613 20:50:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:22.613 20:50:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.613 20:50:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:29:22.613 20:50:10 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:29:23.551 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:29:23.551 20:50:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:29:23.551 20:50:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:29:23.551 20:50:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:23.551 20:50:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:29:23.551 20:50:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002 00:29:23.551 20:50:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:23.551 20:50:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:29:23.551 20:50:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:23.551 20:50:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.551 20:50:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
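The teardown traced above repeats one pattern per subsystem: disconnect the initiator, poll lsblk until the serial disappears, then delete the subsystem over RPC. Condensed into a minimal sketch (the helper names are the ones visible in the xtrace; the loop body is a paraphrase rather than a verbatim copy of srq_overwhelm.sh):

    # one iteration per cnode; waitforserial_disconnect polls
    # 'lsblk -o NAME,SERIAL' until the serial is gone
    for i in $(seq 0 5); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        waitforserial_disconnect "SPDK0000000000000$i"
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done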
00:29:23.551 20:50:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.551 20:50:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:29:23.551 20:50:11 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:29:24.489 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:29:24.489 20:50:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:29:24.489 20:50:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:29:24.489 20:50:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:24.489 20:50:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:29:24.489 20:50:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:24.489 20:50:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:29:24.489 20:50:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:29:24.489 20:50:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:24.489 20:50:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.489 20:50:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:24.489 20:50:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.489 20:50:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:29:24.489 20:50:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:29:25.424 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:29:25.424 20:50:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:29:25.424 20:50:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:29:25.424 20:50:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:25.424 20:50:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:29:25.424 20:50:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:25.424 20:50:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:29:25.424 20:50:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:29:25.424 20:50:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:29:25.424 20:50:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.424 20:50:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 
-- # set +x 00:29:25.424 20:50:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.424 20:50:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:29:25.424 20:50:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:29:26.360 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:26.360 rmmod nvme_rdma 00:29:26.360 rmmod nvme_fabrics 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:29:26.360 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:29:26.361 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 1237644 ']' 00:29:26.361 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # 
killprocess 1237644 00:29:26.361 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # '[' -z 1237644 ']' 00:29:26.361 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # kill -0 1237644 00:29:26.361 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # uname 00:29:26.361 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:26.361 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1237644 00:29:26.620 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:26.620 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:26.620 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1237644' 00:29:26.620 killing process with pid 1237644 00:29:26.620 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@969 -- # kill 1237644 00:29:26.620 20:50:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@974 -- # wait 1237644 00:29:26.880 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:26.880 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:26.880 00:29:26.880 real 0m33.989s 00:29:26.880 user 1m51.852s 00:29:26.880 sys 0m18.237s 00:29:26.880 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:26.880 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:26.880 ************************************ 00:29:26.880 END TEST nvmf_srq_overwhelm 00:29:26.880 ************************************ 00:29:26.880 20:50:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:29:26.880 20:50:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:26.880 20:50:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:26.880 20:50:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:26.880 ************************************ 00:29:26.880 START TEST nvmf_shutdown 00:29:26.880 ************************************ 00:29:26.880 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:29:27.140 * Looking for test storage... 
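With the SRQ-overwhelm suite done (33.989 s wall clock), the harness chains into the shutdown suite via run_test, which essentially wraps the named script in timing and xtrace bookkeeping. Assuming a checked-out SPDK tree, the equivalent standalone invocation would be roughly:

    # hypothetical manual reproduction of the traced run_test call;
    # shutdown.sh performs its own nvmftestinit/nvmftestfini
    sudo ./test/nvmf/target/shutdown.sh --transport=rdma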
00:29:27.140 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.140 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.140 20:50:15 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:27.141 20:50:15 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:27.141 ************************************ 00:29:27.141 START TEST nvmf_shutdown_tc1 00:29:27.141 ************************************ 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:27.141 20:50:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:29:35.264 20:50:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
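gather_supported_nvmf_pci_devs, traced above, buckets NICs by PCI vendor and device ID: Intel (0x8086) E810/x722 parts and Mellanox (0x15b3) ConnectX parts, then narrows to the Mellanox list because the node reports mlx5. A rough shell approximation of that classification (a sketch; the harness itself uses a prebuilt pci_bus_cache rather than lspci):

    # IDs taken from the pci_bus_cache keys in the trace:
    # 0x15b3 ConnectX variants, 0x8086 E810 (1592/159b) and x722 (37d2)
    lspci -Dnn | grep -E '\[(15b3:(1013|1015|1017|1019|101d|1021|a2d6|a2dc)|8086:(1592|159b|37d2))\]'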
00:29:35.264 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:35.265 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:35.265 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:35.265 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.265 20:50:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:35.265 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev 
rxe_net_dev rxe_net_devs 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:29:35.265 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:35.265 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:35.265 
altname enp217s0f0np0 00:29:35.265 altname ens818f0np0 00:29:35.265 inet 192.168.100.8/24 scope global mlx_0_0 00:29:35.265 valid_lft forever preferred_lft forever 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:29:35.265 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:35.265 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:35.265 altname enp217s0f1np1 00:29:35.265 altname ens818f1np1 00:29:35.265 inet 192.168.100.9/24 scope global mlx_0_1 00:29:35.265 valid_lft forever preferred_lft forever 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:35.265 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:35.266 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:35.525 192.168.100.9' 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:35.525 192.168.100.9' 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:29:35.525 20:50:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:35.525 192.168.100.9' 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1246043 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1246043 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1246043 ']' 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:35.525 20:50:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:35.525 [2024-07-26 20:50:23.920000] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
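The target IPs come straight from interface state: for each RDMA netdev the trace runs ip -o -4 addr show, takes field 4, and strips the prefix length, yielding 192.168.100.8 and 192.168.100.9 for NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. The get_ip_address step, as a self-contained sketch of the pipeline shown in the trace:

    # 'ip -o -4' prints one line per address; field 4 is CIDR
    # notation such as 192.168.100.8/24
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8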
00:29:35.525 [2024-07-26 20:50:23.920054] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.525 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.525 [2024-07-26 20:50:24.005602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:35.525 [2024-07-26 20:50:24.044441] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.525 [2024-07-26 20:50:24.044486] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.525 [2024-07-26 20:50:24.044496] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.525 [2024-07-26 20:50:24.044505] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.525 [2024-07-26 20:50:24.044512] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.525 [2024-07-26 20:50:24.044619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.525 [2024-07-26 20:50:24.044688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:35.525 [2024-07-26 20:50:24.044777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.525 [2024-07-26 20:50:24.044778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:36.459 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:36.459 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:29:36.459 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:36.459 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:36.459 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:36.460 [2024-07-26 20:50:24.799053] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14e6160/0x14ea650) succeed. 00:29:36.460 [2024-07-26 20:50:24.808504] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14e77a0/0x152bce0) succeed. 
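Once the four reactors are up, the test creates the RDMA transport with nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192, and the target answers by instantiating one IB device per mlx5 port. rpc_cmd is the harness wrapper around the SPDK RPC client, so the direct equivalent is roughly (a sketch; /var/tmp/spdk.sock is the socket waitforlisten reported):

    # -t transport type, -u I/O unit size in bytes,
    # --num-shared-buffers sizes the shared receive buffer pool
    scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma -u 8192 --num-shared-buffers 1024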
00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.460 20:50:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:36.460 Malloc1 00:29:36.718 [2024-07-26 20:50:25.035986] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:36.718 Malloc2 00:29:36.718 Malloc3 00:29:36.718 Malloc4 00:29:36.718 Malloc5 00:29:36.718 Malloc6 00:29:36.977 Malloc7 00:29:36.977 Malloc8 00:29:36.977 Malloc9 00:29:36.977 Malloc10 00:29:36.977 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.977 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:36.977 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:36.977 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1246348 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1246348 /var/tmp/bdevperf.sock 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1246348 ']' 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:36.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
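The for-loop over num_subsystems above appends one snippet per subsystem to rpcs.txt via cat, and the bare rpc_cmd at shutdown.sh@35 then replays the whole file as a single batch. The log only confirms the visible results (bdevs Malloc1 through Malloc10 and the RDMA listener on 192.168.100.8:4420), so the per-subsystem commands below are a hypothetical reconstruction, not the script's literal contents:

# assumed shape of one rpcs.txt entry per subsystem i (sizes and flags illustrative)
for i in {1..10}; do
cat << EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done
rpc_cmd < rpcs.txt   # one process replays every queued RPC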
00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.978 { 00:29:36.978 "params": { 00:29:36.978 "name": "Nvme$subsystem", 00:29:36.978 "trtype": "$TEST_TRANSPORT", 00:29:36.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.978 "adrfam": "ipv4", 00:29:36.978 "trsvcid": "$NVMF_PORT", 00:29:36.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.978 "hdgst": ${hdgst:-false}, 00:29:36.978 "ddgst": ${ddgst:-false} 00:29:36.978 }, 00:29:36.978 "method": "bdev_nvme_attach_controller" 00:29:36.978 } 00:29:36.978 EOF 00:29:36.978 )") 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.978 { 00:29:36.978 "params": { 00:29:36.978 "name": "Nvme$subsystem", 00:29:36.978 "trtype": "$TEST_TRANSPORT", 00:29:36.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.978 "adrfam": "ipv4", 00:29:36.978 "trsvcid": "$NVMF_PORT", 00:29:36.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.978 "hdgst": ${hdgst:-false}, 00:29:36.978 "ddgst": ${ddgst:-false} 00:29:36.978 }, 00:29:36.978 "method": "bdev_nvme_attach_controller" 00:29:36.978 } 00:29:36.978 EOF 00:29:36.978 )") 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.978 { 00:29:36.978 "params": { 00:29:36.978 "name": "Nvme$subsystem", 00:29:36.978 "trtype": "$TEST_TRANSPORT", 00:29:36.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.978 "adrfam": "ipv4", 00:29:36.978 "trsvcid": "$NVMF_PORT", 00:29:36.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.978 "hdgst": ${hdgst:-false}, 00:29:36.978 "ddgst": ${ddgst:-false} 00:29:36.978 }, 00:29:36.978 "method": "bdev_nvme_attach_controller" 00:29:36.978 } 00:29:36.978 EOF 00:29:36.978 )") 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.978 { 00:29:36.978 "params": { 00:29:36.978 "name": "Nvme$subsystem", 00:29:36.978 "trtype": "$TEST_TRANSPORT", 00:29:36.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.978 "adrfam": "ipv4", 00:29:36.978 "trsvcid": "$NVMF_PORT", 00:29:36.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.978 "hdgst": ${hdgst:-false}, 00:29:36.978 "ddgst": ${ddgst:-false} 00:29:36.978 }, 00:29:36.978 "method": "bdev_nvme_attach_controller" 00:29:36.978 } 00:29:36.978 EOF 00:29:36.978 )") 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.978 { 00:29:36.978 "params": { 00:29:36.978 "name": "Nvme$subsystem", 00:29:36.978 "trtype": "$TEST_TRANSPORT", 00:29:36.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.978 "adrfam": "ipv4", 00:29:36.978 "trsvcid": "$NVMF_PORT", 00:29:36.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.978 "hdgst": ${hdgst:-false}, 00:29:36.978 "ddgst": ${ddgst:-false} 00:29:36.978 }, 00:29:36.978 "method": "bdev_nvme_attach_controller" 00:29:36.978 } 00:29:36.978 EOF 00:29:36.978 )") 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:36.978 [2024-07-26 20:50:25.520811] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:29:36.978 [2024-07-26 20:50:25.520866] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:36.978 { 00:29:36.978 "params": { 00:29:36.978 "name": "Nvme$subsystem", 00:29:36.978 "trtype": "$TEST_TRANSPORT", 00:29:36.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.978 "adrfam": "ipv4", 00:29:36.978 "trsvcid": "$NVMF_PORT", 00:29:36.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.978 "hdgst": ${hdgst:-false}, 00:29:36.978 "ddgst": ${ddgst:-false} 00:29:36.978 }, 00:29:36.978 "method": "bdev_nvme_attach_controller" 00:29:36.978 } 00:29:36.978 EOF 00:29:36.978 )") 00:29:36.978 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:37.238 { 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme$subsystem", 00:29:37.238 "trtype": "$TEST_TRANSPORT", 00:29:37.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": "$NVMF_PORT", 00:29:37.238 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.238 "hdgst": ${hdgst:-false}, 00:29:37.238 "ddgst": ${ddgst:-false} 00:29:37.238 }, 00:29:37.238 "method": "bdev_nvme_attach_controller" 00:29:37.238 } 00:29:37.238 EOF 00:29:37.238 )") 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:37.238 { 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme$subsystem", 00:29:37.238 "trtype": "$TEST_TRANSPORT", 00:29:37.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": "$NVMF_PORT", 00:29:37.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.238 "hdgst": ${hdgst:-false}, 00:29:37.238 "ddgst": ${ddgst:-false} 00:29:37.238 }, 00:29:37.238 "method": "bdev_nvme_attach_controller" 00:29:37.238 } 00:29:37.238 EOF 00:29:37.238 )") 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:37.238 { 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme$subsystem", 00:29:37.238 "trtype": "$TEST_TRANSPORT", 00:29:37.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": "$NVMF_PORT", 00:29:37.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.238 "hdgst": ${hdgst:-false}, 00:29:37.238 "ddgst": ${ddgst:-false} 00:29:37.238 }, 00:29:37.238 "method": "bdev_nvme_attach_controller" 00:29:37.238 } 00:29:37.238 EOF 00:29:37.238 )") 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:37.238 { 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme$subsystem", 00:29:37.238 "trtype": "$TEST_TRANSPORT", 00:29:37.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": "$NVMF_PORT", 00:29:37.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.238 "hdgst": ${hdgst:-false}, 00:29:37.238 "ddgst": ${ddgst:-false} 00:29:37.238 }, 00:29:37.238 "method": "bdev_nvme_attach_controller" 00:29:37.238 } 00:29:37.238 EOF 00:29:37.238 )") 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:29:37.238 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:37.238 20:50:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme1", 00:29:37.238 "trtype": "rdma", 00:29:37.238 "traddr": "192.168.100.8", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": "4420", 00:29:37.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:37.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:37.238 "hdgst": false, 00:29:37.238 "ddgst": false 00:29:37.238 }, 00:29:37.238 "method": "bdev_nvme_attach_controller" 00:29:37.238 },{ 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme2", 00:29:37.238 "trtype": "rdma", 00:29:37.238 "traddr": "192.168.100.8", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": "4420", 00:29:37.238 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:37.238 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:37.238 "hdgst": false, 00:29:37.238 "ddgst": false 00:29:37.238 }, 00:29:37.238 "method": "bdev_nvme_attach_controller" 00:29:37.238 },{ 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme3", 00:29:37.238 "trtype": "rdma", 00:29:37.238 "traddr": "192.168.100.8", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": "4420", 00:29:37.238 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:37.238 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:37.238 "hdgst": false, 00:29:37.238 "ddgst": false 00:29:37.238 }, 00:29:37.238 "method": "bdev_nvme_attach_controller" 00:29:37.238 },{ 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme4", 00:29:37.238 "trtype": "rdma", 00:29:37.238 "traddr": "192.168.100.8", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": "4420", 00:29:37.238 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:37.238 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:37.238 "hdgst": false, 00:29:37.238 "ddgst": false 00:29:37.238 }, 00:29:37.238 "method": "bdev_nvme_attach_controller" 00:29:37.238 },{ 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme5", 00:29:37.238 "trtype": "rdma", 00:29:37.238 "traddr": "192.168.100.8", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": "4420", 00:29:37.238 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:37.238 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:37.238 "hdgst": false, 00:29:37.238 "ddgst": false 00:29:37.238 }, 00:29:37.238 "method": "bdev_nvme_attach_controller" 00:29:37.238 },{ 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme6", 00:29:37.238 "trtype": "rdma", 00:29:37.238 "traddr": "192.168.100.8", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": "4420", 00:29:37.238 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:37.238 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:37.238 "hdgst": false, 00:29:37.238 "ddgst": false 00:29:37.238 }, 00:29:37.238 "method": "bdev_nvme_attach_controller" 00:29:37.238 },{ 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme7", 00:29:37.238 "trtype": "rdma", 00:29:37.238 "traddr": "192.168.100.8", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": "4420", 00:29:37.238 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:37.238 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:37.238 "hdgst": false, 00:29:37.238 "ddgst": false 00:29:37.238 }, 00:29:37.238 "method": "bdev_nvme_attach_controller" 00:29:37.238 },{ 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme8", 00:29:37.238 "trtype": "rdma", 00:29:37.238 "traddr": "192.168.100.8", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": 
"4420", 00:29:37.238 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:37.238 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:37.238 "hdgst": false, 00:29:37.238 "ddgst": false 00:29:37.238 }, 00:29:37.238 "method": "bdev_nvme_attach_controller" 00:29:37.238 },{ 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme9", 00:29:37.238 "trtype": "rdma", 00:29:37.238 "traddr": "192.168.100.8", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": "4420", 00:29:37.238 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:37.238 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:37.238 "hdgst": false, 00:29:37.238 "ddgst": false 00:29:37.238 }, 00:29:37.238 "method": "bdev_nvme_attach_controller" 00:29:37.238 },{ 00:29:37.238 "params": { 00:29:37.238 "name": "Nvme10", 00:29:37.238 "trtype": "rdma", 00:29:37.238 "traddr": "192.168.100.8", 00:29:37.238 "adrfam": "ipv4", 00:29:37.238 "trsvcid": "4420", 00:29:37.238 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:37.239 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:37.239 "hdgst": false, 00:29:37.239 "ddgst": false 00:29:37.239 }, 00:29:37.239 "method": "bdev_nvme_attach_controller" 00:29:37.239 }' 00:29:37.239 [2024-07-26 20:50:25.610831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.239 [2024-07-26 20:50:25.649947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.174 20:50:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:38.174 20:50:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:29:38.174 20:50:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:38.174 20:50:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.174 20:50:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:38.174 20:50:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.174 20:50:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1246348 00:29:38.174 20:50:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:29:38.174 20:50:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:29:39.111 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1246348 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1246043 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem 
config 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.111 { 00:29:39.111 "params": { 00:29:39.111 "name": "Nvme$subsystem", 00:29:39.111 "trtype": "$TEST_TRANSPORT", 00:29:39.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.111 "adrfam": "ipv4", 00:29:39.111 "trsvcid": "$NVMF_PORT", 00:29:39.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.111 "hdgst": ${hdgst:-false}, 00:29:39.111 "ddgst": ${ddgst:-false} 00:29:39.111 }, 00:29:39.111 "method": "bdev_nvme_attach_controller" 00:29:39.111 } 00:29:39.111 EOF 00:29:39.111 )") 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.111 { 00:29:39.111 "params": { 00:29:39.111 "name": "Nvme$subsystem", 00:29:39.111 "trtype": "$TEST_TRANSPORT", 00:29:39.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.111 "adrfam": "ipv4", 00:29:39.111 "trsvcid": "$NVMF_PORT", 00:29:39.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.111 "hdgst": ${hdgst:-false}, 00:29:39.111 "ddgst": ${ddgst:-false} 00:29:39.111 }, 00:29:39.111 "method": "bdev_nvme_attach_controller" 00:29:39.111 } 00:29:39.111 EOF 00:29:39.111 )") 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.111 { 00:29:39.111 "params": { 00:29:39.111 "name": "Nvme$subsystem", 00:29:39.111 "trtype": "$TEST_TRANSPORT", 00:29:39.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.111 "adrfam": "ipv4", 00:29:39.111 "trsvcid": "$NVMF_PORT", 00:29:39.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.111 "hdgst": ${hdgst:-false}, 00:29:39.111 "ddgst": ${ddgst:-false} 00:29:39.111 }, 00:29:39.111 "method": "bdev_nvme_attach_controller" 00:29:39.111 } 00:29:39.111 EOF 00:29:39.111 )") 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.111 { 00:29:39.111 "params": { 00:29:39.111 "name": "Nvme$subsystem", 00:29:39.111 "trtype": "$TEST_TRANSPORT", 00:29:39.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.111 "adrfam": "ipv4", 00:29:39.111 "trsvcid": "$NVMF_PORT", 00:29:39.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.111 "hdgst": ${hdgst:-false}, 00:29:39.111 "ddgst": ${ddgst:-false} 00:29:39.111 }, 
00:29:39.111 "method": "bdev_nvme_attach_controller" 00:29:39.111 } 00:29:39.111 EOF 00:29:39.111 )") 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.111 { 00:29:39.111 "params": { 00:29:39.111 "name": "Nvme$subsystem", 00:29:39.111 "trtype": "$TEST_TRANSPORT", 00:29:39.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.111 "adrfam": "ipv4", 00:29:39.111 "trsvcid": "$NVMF_PORT", 00:29:39.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.111 "hdgst": ${hdgst:-false}, 00:29:39.111 "ddgst": ${ddgst:-false} 00:29:39.111 }, 00:29:39.111 "method": "bdev_nvme_attach_controller" 00:29:39.111 } 00:29:39.111 EOF 00:29:39.111 )") 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.111 { 00:29:39.111 "params": { 00:29:39.111 "name": "Nvme$subsystem", 00:29:39.111 "trtype": "$TEST_TRANSPORT", 00:29:39.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.111 "adrfam": "ipv4", 00:29:39.111 "trsvcid": "$NVMF_PORT", 00:29:39.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.111 "hdgst": ${hdgst:-false}, 00:29:39.111 "ddgst": ${ddgst:-false} 00:29:39.111 }, 00:29:39.111 "method": "bdev_nvme_attach_controller" 00:29:39.111 } 00:29:39.111 EOF 00:29:39.111 )") 00:29:39.111 [2024-07-26 20:50:27.566921] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
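gen_nvmf_target_json, traced once for the throwaway bdev_svc and again here for bdevperf, builds one bdev_nvme_attach_controller params object per argument out of an unquoted heredoc: $subsystem is substituted when cat runs, and ${hdgst:-false} / ${ddgst:-false} default to false because neither variable is set by this test. The fragments are then comma-joined and checked with jq (the IFS=, / printf / jq . steps in the trace). A condensed illustration of the same shell idiom, not the helper itself:

config=()
for subsystem in 1 2; do
config+=("$(cat << EOF
{ "name": "Nvme$subsystem", "hdgst": ${hdgst:-false} }
EOF
)")
done
IFS=','
printf '%s\n' "${config[*]}"
# prints: { "name": "Nvme1", "hdgst": false },{ "name": "Nvme2", "hdgst": false }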
00:29:39.111 [2024-07-26 20:50:27.566976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246674 ] 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.111 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.111 { 00:29:39.111 "params": { 00:29:39.111 "name": "Nvme$subsystem", 00:29:39.111 "trtype": "$TEST_TRANSPORT", 00:29:39.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.111 "adrfam": "ipv4", 00:29:39.111 "trsvcid": "$NVMF_PORT", 00:29:39.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.111 "hdgst": ${hdgst:-false}, 00:29:39.111 "ddgst": ${ddgst:-false} 00:29:39.112 }, 00:29:39.112 "method": "bdev_nvme_attach_controller" 00:29:39.112 } 00:29:39.112 EOF 00:29:39.112 )") 00:29:39.112 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.112 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.112 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.112 { 00:29:39.112 "params": { 00:29:39.112 "name": "Nvme$subsystem", 00:29:39.112 "trtype": "$TEST_TRANSPORT", 00:29:39.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.112 "adrfam": "ipv4", 00:29:39.112 "trsvcid": "$NVMF_PORT", 00:29:39.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.112 "hdgst": ${hdgst:-false}, 00:29:39.112 "ddgst": ${ddgst:-false} 00:29:39.112 }, 00:29:39.112 "method": "bdev_nvme_attach_controller" 00:29:39.112 } 00:29:39.112 EOF 00:29:39.112 )") 00:29:39.112 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.112 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.112 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.112 { 00:29:39.112 "params": { 00:29:39.112 "name": "Nvme$subsystem", 00:29:39.112 "trtype": "$TEST_TRANSPORT", 00:29:39.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.112 "adrfam": "ipv4", 00:29:39.112 "trsvcid": "$NVMF_PORT", 00:29:39.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.112 "hdgst": ${hdgst:-false}, 00:29:39.112 "ddgst": ${ddgst:-false} 00:29:39.112 }, 00:29:39.112 "method": "bdev_nvme_attach_controller" 00:29:39.112 } 00:29:39.112 EOF 00:29:39.112 )") 00:29:39.112 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.112 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:39.112 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:39.112 { 00:29:39.112 "params": { 00:29:39.112 "name": 
"Nvme$subsystem", 00:29:39.112 "trtype": "$TEST_TRANSPORT", 00:29:39.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.112 "adrfam": "ipv4", 00:29:39.112 "trsvcid": "$NVMF_PORT", 00:29:39.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.112 "hdgst": ${hdgst:-false}, 00:29:39.112 "ddgst": ${ddgst:-false} 00:29:39.112 }, 00:29:39.112 "method": "bdev_nvme_attach_controller" 00:29:39.112 } 00:29:39.112 EOF 00:29:39.112 )") 00:29:39.112 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:39.112 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:29:39.112 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:39.112 20:50:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:39.112 "params": { 00:29:39.112 "name": "Nvme1", 00:29:39.112 "trtype": "rdma", 00:29:39.112 "traddr": "192.168.100.8", 00:29:39.112 "adrfam": "ipv4", 00:29:39.112 "trsvcid": "4420", 00:29:39.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:39.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:39.112 "hdgst": false, 00:29:39.112 "ddgst": false 00:29:39.112 }, 00:29:39.112 "method": "bdev_nvme_attach_controller" 00:29:39.112 },{ 00:29:39.112 "params": { 00:29:39.112 "name": "Nvme2", 00:29:39.112 "trtype": "rdma", 00:29:39.112 "traddr": "192.168.100.8", 00:29:39.112 "adrfam": "ipv4", 00:29:39.112 "trsvcid": "4420", 00:29:39.112 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:39.112 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:39.112 "hdgst": false, 00:29:39.112 "ddgst": false 00:29:39.112 }, 00:29:39.112 "method": "bdev_nvme_attach_controller" 00:29:39.112 },{ 00:29:39.112 "params": { 00:29:39.112 "name": "Nvme3", 00:29:39.112 "trtype": "rdma", 00:29:39.112 "traddr": "192.168.100.8", 00:29:39.112 "adrfam": "ipv4", 00:29:39.112 "trsvcid": "4420", 00:29:39.112 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:39.112 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:39.112 "hdgst": false, 00:29:39.112 "ddgst": false 00:29:39.112 }, 00:29:39.112 "method": "bdev_nvme_attach_controller" 00:29:39.112 },{ 00:29:39.112 "params": { 00:29:39.112 "name": "Nvme4", 00:29:39.112 "trtype": "rdma", 00:29:39.112 "traddr": "192.168.100.8", 00:29:39.112 "adrfam": "ipv4", 00:29:39.112 "trsvcid": "4420", 00:29:39.112 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:39.112 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:39.112 "hdgst": false, 00:29:39.112 "ddgst": false 00:29:39.112 }, 00:29:39.112 "method": "bdev_nvme_attach_controller" 00:29:39.112 },{ 00:29:39.112 "params": { 00:29:39.112 "name": "Nvme5", 00:29:39.112 "trtype": "rdma", 00:29:39.112 "traddr": "192.168.100.8", 00:29:39.112 "adrfam": "ipv4", 00:29:39.112 "trsvcid": "4420", 00:29:39.112 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:39.112 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:39.112 "hdgst": false, 00:29:39.112 "ddgst": false 00:29:39.112 }, 00:29:39.112 "method": "bdev_nvme_attach_controller" 00:29:39.112 },{ 00:29:39.112 "params": { 00:29:39.112 "name": "Nvme6", 00:29:39.112 "trtype": "rdma", 00:29:39.112 "traddr": "192.168.100.8", 00:29:39.112 "adrfam": "ipv4", 00:29:39.112 "trsvcid": "4420", 00:29:39.112 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:39.112 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:39.112 "hdgst": false, 00:29:39.112 "ddgst": false 00:29:39.112 }, 00:29:39.112 "method": 
"bdev_nvme_attach_controller" 00:29:39.112 },{ 00:29:39.112 "params": { 00:29:39.112 "name": "Nvme7", 00:29:39.112 "trtype": "rdma", 00:29:39.112 "traddr": "192.168.100.8", 00:29:39.112 "adrfam": "ipv4", 00:29:39.112 "trsvcid": "4420", 00:29:39.112 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:39.112 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:39.112 "hdgst": false, 00:29:39.112 "ddgst": false 00:29:39.112 }, 00:29:39.112 "method": "bdev_nvme_attach_controller" 00:29:39.112 },{ 00:29:39.112 "params": { 00:29:39.112 "name": "Nvme8", 00:29:39.112 "trtype": "rdma", 00:29:39.112 "traddr": "192.168.100.8", 00:29:39.112 "adrfam": "ipv4", 00:29:39.112 "trsvcid": "4420", 00:29:39.112 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:39.112 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:39.112 "hdgst": false, 00:29:39.112 "ddgst": false 00:29:39.112 }, 00:29:39.112 "method": "bdev_nvme_attach_controller" 00:29:39.112 },{ 00:29:39.112 "params": { 00:29:39.112 "name": "Nvme9", 00:29:39.112 "trtype": "rdma", 00:29:39.112 "traddr": "192.168.100.8", 00:29:39.112 "adrfam": "ipv4", 00:29:39.112 "trsvcid": "4420", 00:29:39.112 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:39.112 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:39.112 "hdgst": false, 00:29:39.112 "ddgst": false 00:29:39.112 }, 00:29:39.112 "method": "bdev_nvme_attach_controller" 00:29:39.112 },{ 00:29:39.112 "params": { 00:29:39.112 "name": "Nvme10", 00:29:39.112 "trtype": "rdma", 00:29:39.112 "traddr": "192.168.100.8", 00:29:39.112 "adrfam": "ipv4", 00:29:39.112 "trsvcid": "4420", 00:29:39.112 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:39.112 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:39.112 "hdgst": false, 00:29:39.112 "ddgst": false 00:29:39.112 }, 00:29:39.112 "method": "bdev_nvme_attach_controller" 00:29:39.112 }' 00:29:39.112 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.112 [2024-07-26 20:50:27.658164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.371 [2024-07-26 20:50:27.697575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.306 Running I/O for 1 seconds... 
00:29:41.244
00:29:41.244 Latency(us)
00:29:41.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:41.244 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:41.244 Verification LBA range: start 0x0 length 0x400
00:29:41.244 Nvme1n1 : 1.17 397.05 24.82 0.00 0.00 159285.33 6186.60 208037.48
00:29:41.244 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:41.244 Verification LBA range: start 0x0 length 0x400
00:29:41.244 Nvme2n1 : 1.17 391.66 24.48 0.00 0.00 158726.25 9332.33 156866.97
00:29:41.244 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:41.244 Verification LBA range: start 0x0 length 0x400
00:29:41.244 Nvme3n1 : 1.17 404.99 25.31 0.00 0.00 151780.60 9384.76 148478.36
00:29:41.244 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:41.244 Verification LBA range: start 0x0 length 0x400
00:29:41.244 Nvme4n1 : 1.17 398.72 24.92 0.00 0.00 151889.01 4771.02 137573.17
00:29:41.244 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:41.244 Verification LBA range: start 0x0 length 0x400
00:29:41.244 Nvme5n1 : 1.17 389.08 24.32 0.00 0.00 153407.36 9122.61 124990.26
00:29:41.244 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:41.244 Verification LBA range: start 0x0 length 0x400
00:29:41.244 Nvme6n1 : 1.17 387.14 24.20 0.00 0.00 152045.71 8860.47 112407.35
00:29:41.244 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:41.244 Verification LBA range: start 0x0 length 0x400
00:29:41.244 Nvme7n1 : 1.17 400.45 25.03 0.00 0.00 145244.06 8860.47 102760.45
00:29:41.244 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:41.244 Verification LBA range: start 0x0 length 0x400
00:29:41.244 Nvme8n1 : 1.17 403.52 25.22 0.00 0.00 142172.31 8493.47 96468.99
00:29:41.244 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:41.244 Verification LBA range: start 0x0 length 0x400
00:29:41.244 Nvme9n1 : 1.18 382.03 23.88 0.00 0.00 147592.62 8650.75 105277.03
00:29:41.244 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:41.244 Verification LBA range: start 0x0 length 0x400
00:29:41.244 Nvme10n1 : 1.17 328.93 20.56 0.00 0.00 170080.67 8912.90 211392.92
00:29:41.244 ===================================================================================================================
00:29:41.244 Total : 3883.57 242.72 0.00 0.00 152900.69 4771.02 211392.92
00:29:41.504 20:50:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:29:41.504 20:50:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:29:41.504 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:41.504 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:41.504 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:29:41.504 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:41.504 20:50:30
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:29:41.504 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:41.504 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:41.504 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:29:41.504 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:41.504 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:41.504 rmmod nvme_rdma 00:29:41.504 rmmod nvme_fabrics 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1246043 ']' 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1246043 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1246043 ']' 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1246043 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1246043 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1246043' 00:29:41.799 killing process with pid 1246043 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1246043 00:29:41.799 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1246043 00:29:42.059 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:42.059 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:42.059 00:29:42.059 real 0m15.019s 00:29:42.059 user 0m31.080s 00:29:42.059 sys 0m7.504s 00:29:42.059 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:42.059 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:42.059 ************************************ 00:29:42.059 END 
TEST nvmf_shutdown_tc1 00:29:42.059 ************************************ 00:29:42.059 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:42.059 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:42.059 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:42.059 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:42.319 ************************************ 00:29:42.319 START TEST nvmf_shutdown_tc2 00:29:42.319 ************************************ 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:42.319 20:50:30 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 
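The arrays above whitelist NVMe-oF-capable NICs by PCI vendor/device ID: Intel E810 (0x1592, 0x159b), Intel X722 (0x37d2) and a range of Mellanox parts (0x15b3 is the Mellanox vendor id; the 0x1015 matched next is a ConnectX-4 Lx). The discovery that follows maps each matching PCI address to its kernel netdev names straight out of sysfs; the same idiom in isolation:

pci=0000:d9:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one sysfs entry per ifname
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the names
echo "Found net devices under $pci: ${pci_net_devs[*]}"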
00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:42.319 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:42.319 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:42.319 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 
00:29:42.320 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:42.320 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:42.320 20:50:30 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:42.320 20:50:30 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:29:42.320 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:42.320 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:42.320 altname enp217s0f0np0 00:29:42.320 altname ens818f0np0 00:29:42.320 inet 192.168.100.8/24 scope global mlx_0_0 00:29:42.320 valid_lft forever preferred_lft forever 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:29:42.320 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:42.320 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:42.320 altname enp217s0f1np1 00:29:42.320 altname ens818f1np1 00:29:42.320 inet 192.168.100.9/24 scope global mlx_0_1 00:29:42.320 valid_lft forever preferred_lft forever 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:42.320 
20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:42.320 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:42.321 192.168.100.9' 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:42.321 192.168.100.9' 00:29:42.321 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:29:42.580 20:50:30 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:42.580 192.168.100.9' 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1247312 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1247312 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1247312 ']' 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:42.580 20:50:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.580 [2024-07-26 20:50:30.964294] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
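[Annotation] The traces above are nvmf/common.sh discovering its RDMA addresses: each Mellanox PCI function is mapped to a net device through sysfs, get_ip_address peels the IPv4 address out of "ip -o -4 addr show", and the first and second target IPs are split off the newline-separated list with head/tail. A minimal standalone sketch of that pattern, reassembled from the trace (the surrounding loop and echo are illustrative; the individual commands are the ones actually traced):

#!/usr/bin/env bash
pci=0000:d9:00.0                               # PCI address seen in this run

# Net devices bound to a PCI function show up under its sysfs node.
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")        # keep basenames, e.g. mlx_0_0

get_ip_address() {
    local interface=$1
    # -o prints one record per line; field 4 is "address/prefix".
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

for dev in "${pci_net_devs[@]}"; do
    echo "$dev -> $(get_ip_address "$dev")"
done

# First/second target IPs come off a newline-separated list, exactly as
# the head/tail pipeline in the trace does.
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)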
00:29:42.580 [2024-07-26 20:50:30.964342] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.580 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.580 [2024-07-26 20:50:31.048803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:42.580 [2024-07-26 20:50:31.089567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.580 [2024-07-26 20:50:31.089605] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.580 [2024-07-26 20:50:31.089615] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.580 [2024-07-26 20:50:31.089623] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.580 [2024-07-26 20:50:31.089635] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.580 [2024-07-26 20:50:31.089741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.580 [2024-07-26 20:50:31.089825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.580 [2024-07-26 20:50:31.089934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.580 [2024-07-26 20:50:31.089936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:43.516 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:43.516 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:43.516 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:43.516 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:43.516 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.516 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.516 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.517 [2024-07-26 20:50:31.851448] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fea160/0x1fee650) succeed. 00:29:43.517 [2024-07-26 20:50:31.860673] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1feb7a0/0x202fce0) succeed. 
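[Annotation] nvmfappstart, traced above, launches nvmf_tgt and blocks in waitforlisten until the RPC socket answers, after which the test creates the RDMA transport. A simplified sketch of that bring-up using the binary path, app flags, and transport options visible in this run; the socket poll below is a stand-in for the suite's waitforlisten helper, which additionally probes the socket with an RPC:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC_SOCK=/var/tmp/spdk.sock

# Same flags as the trace: instance id 0, tracepoint mask 0xFFFF, core mask 0x1E.
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Stand-in for waitforlisten: wait for the UNIX-domain RPC socket to appear.
until [[ -S $RPC_SOCK ]]; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
    sleep 0.1
done

# The transport parameters rpc_cmd passes at target/shutdown.sh@20.
"$SPDK/scripts/rpc.py" -s "$RPC_SOCK" nvmf_create_transport \
    -t rdma --num-shared-buffers 1024 -u 8192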
00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:43.517 20:50:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.517 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.517 Malloc1 00:29:43.775 [2024-07-26 20:50:32.082676] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:43.775 Malloc2 00:29:43.775 Malloc3 00:29:43.775 Malloc4 00:29:43.775 Malloc5 00:29:43.775 Malloc6 00:29:44.034 Malloc7 00:29:44.034 Malloc8 00:29:44.034 Malloc9 00:29:44.034 Malloc10 00:29:44.034 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.034 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:44.034 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:44.034 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:44.034 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1247634 00:29:44.034 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1247634 /var/tmp/bdevperf.sock 00:29:44.034 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1247634 ']' 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:44.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
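[Annotation] The "for i" / "cat" lines above are target/shutdown.sh appending one RPC batch per subsystem into rpcs.txt; the batch text itself is never echoed in the trace, so the heredoc below is a reconstruction, kept consistent only with what the log shows being created afterwards (bdevs Malloc1 through Malloc10, subsystems nqn.2016-06.io.spdk:cnode1..10, and the RDMA listener on 192.168.100.8:4420). The malloc sizes and serial numbers are assumptions:

#!/usr/bin/env bash
RPCS=rpcs.txt
rm -f "$RPCS"

for i in {1..10}; do
    cat >> "$RPCS" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done

# rpc.py executes one command per line when fed a batch on stdin,
# which is how the suite's rpc_cmd consumes the file.
./scripts/rpc.py -s /var/tmp/spdk.sock < "$RPCS"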
00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.035 { 00:29:44.035 "params": { 00:29:44.035 "name": "Nvme$subsystem", 00:29:44.035 "trtype": "$TEST_TRANSPORT", 00:29:44.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.035 "adrfam": "ipv4", 00:29:44.035 "trsvcid": "$NVMF_PORT", 00:29:44.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.035 "hdgst": ${hdgst:-false}, 00:29:44.035 "ddgst": ${ddgst:-false} 00:29:44.035 }, 00:29:44.035 "method": "bdev_nvme_attach_controller" 00:29:44.035 } 00:29:44.035 EOF 00:29:44.035 )") 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.035 { 00:29:44.035 "params": { 00:29:44.035 "name": "Nvme$subsystem", 00:29:44.035 "trtype": "$TEST_TRANSPORT", 00:29:44.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.035 "adrfam": "ipv4", 00:29:44.035 "trsvcid": "$NVMF_PORT", 00:29:44.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.035 "hdgst": ${hdgst:-false}, 00:29:44.035 "ddgst": ${ddgst:-false} 00:29:44.035 }, 00:29:44.035 "method": "bdev_nvme_attach_controller" 00:29:44.035 } 00:29:44.035 EOF 00:29:44.035 )") 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.035 { 00:29:44.035 "params": { 00:29:44.035 "name": "Nvme$subsystem", 00:29:44.035 "trtype": "$TEST_TRANSPORT", 00:29:44.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.035 "adrfam": "ipv4", 00:29:44.035 "trsvcid": "$NVMF_PORT", 00:29:44.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.035 "hdgst": ${hdgst:-false}, 00:29:44.035 "ddgst": ${ddgst:-false} 00:29:44.035 }, 00:29:44.035 "method": 
"bdev_nvme_attach_controller" 00:29:44.035 } 00:29:44.035 EOF 00:29:44.035 )") 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.035 { 00:29:44.035 "params": { 00:29:44.035 "name": "Nvme$subsystem", 00:29:44.035 "trtype": "$TEST_TRANSPORT", 00:29:44.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.035 "adrfam": "ipv4", 00:29:44.035 "trsvcid": "$NVMF_PORT", 00:29:44.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.035 "hdgst": ${hdgst:-false}, 00:29:44.035 "ddgst": ${ddgst:-false} 00:29:44.035 }, 00:29:44.035 "method": "bdev_nvme_attach_controller" 00:29:44.035 } 00:29:44.035 EOF 00:29:44.035 )") 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.035 { 00:29:44.035 "params": { 00:29:44.035 "name": "Nvme$subsystem", 00:29:44.035 "trtype": "$TEST_TRANSPORT", 00:29:44.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.035 "adrfam": "ipv4", 00:29:44.035 "trsvcid": "$NVMF_PORT", 00:29:44.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.035 "hdgst": ${hdgst:-false}, 00:29:44.035 "ddgst": ${ddgst:-false} 00:29:44.035 }, 00:29:44.035 "method": "bdev_nvme_attach_controller" 00:29:44.035 } 00:29:44.035 EOF 00:29:44.035 )") 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:44.035 [2024-07-26 20:50:32.568774] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
00:29:44.035 [2024-07-26 20:50:32.568831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247634 ] 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.035 { 00:29:44.035 "params": { 00:29:44.035 "name": "Nvme$subsystem", 00:29:44.035 "trtype": "$TEST_TRANSPORT", 00:29:44.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.035 "adrfam": "ipv4", 00:29:44.035 "trsvcid": "$NVMF_PORT", 00:29:44.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.035 "hdgst": ${hdgst:-false}, 00:29:44.035 "ddgst": ${ddgst:-false} 00:29:44.035 }, 00:29:44.035 "method": "bdev_nvme_attach_controller" 00:29:44.035 } 00:29:44.035 EOF 00:29:44.035 )") 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.035 { 00:29:44.035 "params": { 00:29:44.035 "name": "Nvme$subsystem", 00:29:44.035 "trtype": "$TEST_TRANSPORT", 00:29:44.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.035 "adrfam": "ipv4", 00:29:44.035 "trsvcid": "$NVMF_PORT", 00:29:44.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.035 "hdgst": ${hdgst:-false}, 00:29:44.035 "ddgst": ${ddgst:-false} 00:29:44.035 }, 00:29:44.035 "method": "bdev_nvme_attach_controller" 00:29:44.035 } 00:29:44.035 EOF 00:29:44.035 )") 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.035 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.035 { 00:29:44.035 "params": { 00:29:44.035 "name": "Nvme$subsystem", 00:29:44.035 "trtype": "$TEST_TRANSPORT", 00:29:44.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.035 "adrfam": "ipv4", 00:29:44.035 "trsvcid": "$NVMF_PORT", 00:29:44.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.035 "hdgst": ${hdgst:-false}, 00:29:44.035 "ddgst": ${ddgst:-false} 00:29:44.035 }, 00:29:44.035 "method": "bdev_nvme_attach_controller" 00:29:44.035 } 00:29:44.035 EOF 00:29:44.035 )") 00:29:44.294 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:44.294 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.294 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.294 { 00:29:44.294 "params": { 00:29:44.294 "name": "Nvme$subsystem", 00:29:44.294 "trtype": "$TEST_TRANSPORT", 00:29:44.294 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:29:44.294 "adrfam": "ipv4", 00:29:44.294 "trsvcid": "$NVMF_PORT", 00:29:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.294 "hdgst": ${hdgst:-false}, 00:29:44.294 "ddgst": ${ddgst:-false} 00:29:44.294 }, 00:29:44.294 "method": "bdev_nvme_attach_controller" 00:29:44.294 } 00:29:44.294 EOF 00:29:44.294 )") 00:29:44.294 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:44.294 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:44.294 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:44.294 { 00:29:44.294 "params": { 00:29:44.294 "name": "Nvme$subsystem", 00:29:44.294 "trtype": "$TEST_TRANSPORT", 00:29:44.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.294 "adrfam": "ipv4", 00:29:44.294 "trsvcid": "$NVMF_PORT", 00:29:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.294 "hdgst": ${hdgst:-false}, 00:29:44.294 "ddgst": ${ddgst:-false} 00:29:44.294 }, 00:29:44.294 "method": "bdev_nvme_attach_controller" 00:29:44.294 } 00:29:44.294 EOF 00:29:44.294 )") 00:29:44.294 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:44.294 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:29:44.294 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.294 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:29:44.294 20:50:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:44.294 "params": { 00:29:44.294 "name": "Nvme1", 00:29:44.294 "trtype": "rdma", 00:29:44.294 "traddr": "192.168.100.8", 00:29:44.294 "adrfam": "ipv4", 00:29:44.294 "trsvcid": "4420", 00:29:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:44.294 "hdgst": false, 00:29:44.294 "ddgst": false 00:29:44.294 }, 00:29:44.294 "method": "bdev_nvme_attach_controller" 00:29:44.294 },{ 00:29:44.294 "params": { 00:29:44.294 "name": "Nvme2", 00:29:44.294 "trtype": "rdma", 00:29:44.294 "traddr": "192.168.100.8", 00:29:44.294 "adrfam": "ipv4", 00:29:44.294 "trsvcid": "4420", 00:29:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:44.294 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:44.294 "hdgst": false, 00:29:44.294 "ddgst": false 00:29:44.294 }, 00:29:44.294 "method": "bdev_nvme_attach_controller" 00:29:44.294 },{ 00:29:44.294 "params": { 00:29:44.294 "name": "Nvme3", 00:29:44.294 "trtype": "rdma", 00:29:44.294 "traddr": "192.168.100.8", 00:29:44.294 "adrfam": "ipv4", 00:29:44.294 "trsvcid": "4420", 00:29:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:44.294 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:44.294 "hdgst": false, 00:29:44.294 "ddgst": false 00:29:44.294 }, 00:29:44.294 "method": "bdev_nvme_attach_controller" 00:29:44.294 },{ 00:29:44.294 "params": { 00:29:44.294 "name": "Nvme4", 00:29:44.294 "trtype": "rdma", 00:29:44.294 "traddr": "192.168.100.8", 00:29:44.294 "adrfam": "ipv4", 00:29:44.294 "trsvcid": "4420", 00:29:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:44.294 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:44.294 "hdgst": false, 00:29:44.294 "ddgst": false 00:29:44.294 }, 00:29:44.294 
"method": "bdev_nvme_attach_controller" 00:29:44.294 },{ 00:29:44.294 "params": { 00:29:44.294 "name": "Nvme5", 00:29:44.294 "trtype": "rdma", 00:29:44.294 "traddr": "192.168.100.8", 00:29:44.294 "adrfam": "ipv4", 00:29:44.294 "trsvcid": "4420", 00:29:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:44.294 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:44.294 "hdgst": false, 00:29:44.294 "ddgst": false 00:29:44.294 }, 00:29:44.294 "method": "bdev_nvme_attach_controller" 00:29:44.294 },{ 00:29:44.294 "params": { 00:29:44.294 "name": "Nvme6", 00:29:44.294 "trtype": "rdma", 00:29:44.294 "traddr": "192.168.100.8", 00:29:44.294 "adrfam": "ipv4", 00:29:44.294 "trsvcid": "4420", 00:29:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:44.294 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:44.294 "hdgst": false, 00:29:44.294 "ddgst": false 00:29:44.294 }, 00:29:44.294 "method": "bdev_nvme_attach_controller" 00:29:44.294 },{ 00:29:44.294 "params": { 00:29:44.294 "name": "Nvme7", 00:29:44.294 "trtype": "rdma", 00:29:44.294 "traddr": "192.168.100.8", 00:29:44.294 "adrfam": "ipv4", 00:29:44.294 "trsvcid": "4420", 00:29:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:44.294 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:44.294 "hdgst": false, 00:29:44.294 "ddgst": false 00:29:44.294 }, 00:29:44.294 "method": "bdev_nvme_attach_controller" 00:29:44.294 },{ 00:29:44.294 "params": { 00:29:44.294 "name": "Nvme8", 00:29:44.294 "trtype": "rdma", 00:29:44.294 "traddr": "192.168.100.8", 00:29:44.294 "adrfam": "ipv4", 00:29:44.294 "trsvcid": "4420", 00:29:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:44.294 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:44.294 "hdgst": false, 00:29:44.294 "ddgst": false 00:29:44.294 }, 00:29:44.294 "method": "bdev_nvme_attach_controller" 00:29:44.294 },{ 00:29:44.294 "params": { 00:29:44.294 "name": "Nvme9", 00:29:44.294 "trtype": "rdma", 00:29:44.294 "traddr": "192.168.100.8", 00:29:44.294 "adrfam": "ipv4", 00:29:44.294 "trsvcid": "4420", 00:29:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:44.294 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:44.294 "hdgst": false, 00:29:44.294 "ddgst": false 00:29:44.294 }, 00:29:44.294 "method": "bdev_nvme_attach_controller" 00:29:44.294 },{ 00:29:44.294 "params": { 00:29:44.294 "name": "Nvme10", 00:29:44.294 "trtype": "rdma", 00:29:44.294 "traddr": "192.168.100.8", 00:29:44.294 "adrfam": "ipv4", 00:29:44.294 "trsvcid": "4420", 00:29:44.294 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:44.294 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:44.294 "hdgst": false, 00:29:44.294 "ddgst": false 00:29:44.294 }, 00:29:44.294 "method": "bdev_nvme_attach_controller" 00:29:44.294 }' 00:29:44.294 [2024-07-26 20:50:32.657818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.294 [2024-07-26 20:50:32.696390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.239 Running I/O for 10 seconds... 
00:29:45.239 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.240 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:45.498 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.498 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=19 00:29:45.498 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 19 -ge 100 ']' 00:29:45.498 20:50:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.757 
20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=171 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 171 -ge 100 ']' 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1247634 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1247634 ']' 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1247634 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:45.757 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1247634 00:29:46.016 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:46.016 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:46.016 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1247634' killing process with pid 1247634 00:29:46.016 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1247634 00:29:46.016 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1247634
00:29:46.016 Received shutdown signal, test time was about 0.819933 seconds
00:29:46.016
00:29:46.016 Latency(us)
00:29:46.016 Device Information  : runtime(s)     IOPS   MiB/s  Fail/s    TO/s    Average        min        max
00:29:46.016 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.016 Verification LBA range: start 0x0 length 0x400
00:29:46.016 Nvme1n1             :       0.81   367.17   22.95    0.00    0.00  170473.08    5662.31  233203.30
00:29:46.016 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.016 Verification LBA range: start 0x0 length 0x400
00:29:46.016 Nvme2n1             :       0.81   396.39   24.77    0.00    0.00  154757.00    6501.17  161900.13
00:29:46.016 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.016 Verification LBA range: start 0x0 length 0x400
00:29:46.016 Nvme3n1             :       0.81   395.84   24.74    0.00    0.00  152013.05    7811.89  155189.25
00:29:46.016 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.016 Verification LBA range: start 0x0 length 0x400
00:29:46.016 Nvme4n1             :       0.81   396.57   24.79    0.00    0.00  148618.95    4875.88  150156.08
00:29:46.016 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.016 Verification LBA range: start 0x0 length 0x400
00:29:46.016 Nvme5n1             :       0.81   394.68   24.67    0.00    0.00  146936.79    8493.47  139250.89
00:29:46.016 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.016 Verification LBA range: start 0x0 length 0x400
00:29:46.016 Nvme6n1             :       0.81   394.14   24.63    0.00    0.00  143674.74    8860.47  132540.01
00:29:46.016 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.016 Verification LBA range: start 0x0 length 0x400
00:29:46.016 Nvme7n1             :       0.81   393.58   24.60    0.00    0.00  140977.27    9175.04  124990.26
00:29:46.016 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.016 Verification LBA range: start 0x0 length 0x400
00:29:46.016 Nvme8n1             :       0.81   393.00   24.56    0.00    0.00  138248.27    9542.04  115762.79
00:29:46.016 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.016 Verification LBA range: start 0x0 length 0x400
00:29:46.016 Nvme9n1             :       0.82   392.37   24.52    0.00    0.00  135742.42   10066.33  105277.03
00:29:46.016 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.016 Verification LBA range: start 0x0 length 0x400
00:29:46.016 Nvme10n1            :       0.82   312.46   19.53    0.00    0.00  166945.66    2254.44  236558.75
00:29:46.016 ===================================================================================================================
00:29:46.016 Total               :             3836.21  239.76    0.00    0.00  149327.55    2254.44  236558.75
00:29:46.275 20:50:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1247312 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:47.209 rmmod nvme_rdma 00:29:47.209 rmmod nvme_fabrics 00:29:47.209
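[Annotation] Before tearing anything down, the tc2 body gates on bdevperf having done real I/O: waitforio polls bdev_get_iostat over the bdevperf RPC socket until Nvme1n1's read counter reaches 100 ops, which is the read_io_count=19 and then read_io_count=171 samples in the traces above, taken 0.25 s apart with a 10-attempt budget. A condensed sketch of that loop:

#!/usr/bin/env bash
waitforio() {
    local sock=$1 bdev=$2 i ops
    for ((i = 10; i != 0; i--)); do
        ops=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && return 0   # enough reads observed
        sleep 0.25
    done
    return 1                             # bdevperf never did enough I/O
}

waitforio /var/tmp/bdevperf.sock Nvme1n1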
20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1247312 ']' 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1247312 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1247312 ']' 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1247312 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1247312 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1247312' 00:29:47.209 killing process with pid 1247312 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1247312 00:29:47.209 20:50:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1247312 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:47.778 00:29:47.778 real 0m5.552s 00:29:47.778 user 0m22.287s 00:29:47.778 sys 0m1.229s 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:47.778 ************************************ 00:29:47.778 END TEST nvmf_shutdown_tc2 00:29:47.778 ************************************ 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:47.778 ************************************ 00:29:47.778 START TEST nvmf_shutdown_tc3 00:29:47.778 ************************************ 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 
00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:47.778 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 
0x1015 == \0\x\1\0\1\7 ]] 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:47.778 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:47.778 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:47.779 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:47.779 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:47.779 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:48.038 20:50:36 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:48.038 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:29:48.039 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:48.039 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:48.039 altname enp217s0f0np0 00:29:48.039 altname ens818f0np0 00:29:48.039 inet 192.168.100.8/24 scope global mlx_0_0 00:29:48.039 valid_lft forever preferred_lft forever 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:48.039 20:50:36 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:29:48.039 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:48.039 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:48.039 altname enp217s0f1np1 00:29:48.039 altname ens818f1np1 00:29:48.039 inet 192.168.100.9/24 scope global mlx_0_1 00:29:48.039 valid_lft forever preferred_lft forever 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.039 20:50:36 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:48.039 192.168.100.9' 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:48.039 192.168.100.9' 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:48.039 192.168.100.9' 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:48.039 20:50:36 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1248392 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1248392 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1248392 ']' 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:48.039 20:50:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.298 [2024-07-26 20:50:36.617555] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:29:48.298 [2024-07-26 20:50:36.617605] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.298 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.298 [2024-07-26 20:50:36.703504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.298 [2024-07-26 20:50:36.742455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.298 [2024-07-26 20:50:36.742500] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:48.298 [2024-07-26 20:50:36.742510] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.298 [2024-07-26 20:50:36.742519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.298 [2024-07-26 20:50:36.742526] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.298 [2024-07-26 20:50:36.742652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.298 [2024-07-26 20:50:36.742736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.298 [2024-07-26 20:50:36.742848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.298 [2024-07-26 20:50:36.742849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.235 [2024-07-26 20:50:37.497435] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17f1160/0x17f5650) succeed. 00:29:49.235 [2024-07-26 20:50:37.506669] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17f27a0/0x1836ce0) succeed. 
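At this point the target side is fully up: nvmf_tgt was launched with reactor mask 0x1E (cores 1 through 4, matching the four "Reactor started" notices), waitforlisten confirmed the /var/tmp/spdk.sock RPC socket, and nvmf_create_transport bound the RDMA transport to both mlx5 ports (the two "Create IB device ... succeed" notices). Reduced to standalone commands this is roughly the following sketch, assuming an SPDK checkout as the working directory and using rpc.py in place of the harness's rpc_cmd/waitforlisten helpers:

    sudo modprobe nvme-rdma                        # initiator-side module, loaded above
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &  # -e 0xFFFF enables all tracepoint groups
    ./scripts/rpc.py framework_wait_init           # block until the app can serve RPCs
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192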
00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.235 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.236 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.236 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.236 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.236 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.236 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.236 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.236 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:29:49.236 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.236 20:50:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.236 Malloc1 00:29:49.236 [2024-07-26 20:50:37.718860] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:49.236 Malloc2 00:29:49.236 Malloc3 00:29:49.495 Malloc4 00:29:49.495 Malloc5 00:29:49.495 Malloc6 00:29:49.495 Malloc7 00:29:49.495 Malloc8 00:29:49.495 Malloc9 00:29:49.755 Malloc10 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1248714 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1248714 /var/tmp/bdevperf.sock 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1248714 ']' 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:49.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
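The ten cat iterations above (shutdown.sh@27-28) append one RPC batch per subsystem to rpcs.txt, which shutdown.sh@35 then applies in a single rpc_cmd invocation; the results visible in the log are the Malloc1 through Malloc10 bdevs and one RDMA listener on 192.168.100.8:4420. The heredoc body itself is hidden behind cat in the trace, so the reconstruction below is an assumption consistent with those observable results (the 64 MiB size and 512 B block size in particular are guesses, and printf stands in for the traced cat-heredoc to keep the sketch copy-pasteable):

    for i in {1..10}; do
        printf '%s\n' \
            "bdev_malloc_create -b Malloc$i 64 512" \
            "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i" \
            "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i" \
            "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420"
    done > rpcs.txt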
00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.755 { 00:29:49.755 "params": { 00:29:49.755 "name": "Nvme$subsystem", 00:29:49.755 "trtype": "$TEST_TRANSPORT", 00:29:49.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.755 "adrfam": "ipv4", 00:29:49.755 "trsvcid": "$NVMF_PORT", 00:29:49.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.755 "hdgst": ${hdgst:-false}, 00:29:49.755 "ddgst": ${ddgst:-false} 00:29:49.755 }, 00:29:49.755 "method": "bdev_nvme_attach_controller" 00:29:49.755 } 00:29:49.755 EOF 00:29:49.755 )") 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.755 { 00:29:49.755 "params": { 00:29:49.755 "name": "Nvme$subsystem", 00:29:49.755 "trtype": "$TEST_TRANSPORT", 00:29:49.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.755 "adrfam": "ipv4", 00:29:49.755 "trsvcid": "$NVMF_PORT", 00:29:49.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.755 "hdgst": ${hdgst:-false}, 00:29:49.755 "ddgst": ${ddgst:-false} 00:29:49.755 }, 00:29:49.755 "method": "bdev_nvme_attach_controller" 00:29:49.755 } 00:29:49.755 EOF 00:29:49.755 )") 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.755 { 00:29:49.755 "params": { 00:29:49.755 "name": "Nvme$subsystem", 00:29:49.755 "trtype": "$TEST_TRANSPORT", 00:29:49.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.755 "adrfam": "ipv4", 00:29:49.755 "trsvcid": "$NVMF_PORT", 00:29:49.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.755 "hdgst": ${hdgst:-false}, 00:29:49.755 "ddgst": ${ddgst:-false} 00:29:49.755 }, 00:29:49.755 "method": 
"bdev_nvme_attach_controller" 00:29:49.755 } 00:29:49.755 EOF 00:29:49.755 )") 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.755 { 00:29:49.755 "params": { 00:29:49.755 "name": "Nvme$subsystem", 00:29:49.755 "trtype": "$TEST_TRANSPORT", 00:29:49.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.755 "adrfam": "ipv4", 00:29:49.755 "trsvcid": "$NVMF_PORT", 00:29:49.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.755 "hdgst": ${hdgst:-false}, 00:29:49.755 "ddgst": ${ddgst:-false} 00:29:49.755 }, 00:29:49.755 "method": "bdev_nvme_attach_controller" 00:29:49.755 } 00:29:49.755 EOF 00:29:49.755 )") 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.755 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.755 { 00:29:49.755 "params": { 00:29:49.755 "name": "Nvme$subsystem", 00:29:49.755 "trtype": "$TEST_TRANSPORT", 00:29:49.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.755 "adrfam": "ipv4", 00:29:49.755 "trsvcid": "$NVMF_PORT", 00:29:49.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.755 "hdgst": ${hdgst:-false}, 00:29:49.756 "ddgst": ${ddgst:-false} 00:29:49.756 }, 00:29:49.756 "method": "bdev_nvme_attach_controller" 00:29:49.756 } 00:29:49.756 EOF 00:29:49.756 )") 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.756 [2024-07-26 20:50:38.200787] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
00:29:49.756 [2024-07-26 20:50:38.200843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248714 ] 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.756 { 00:29:49.756 "params": { 00:29:49.756 "name": "Nvme$subsystem", 00:29:49.756 "trtype": "$TEST_TRANSPORT", 00:29:49.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.756 "adrfam": "ipv4", 00:29:49.756 "trsvcid": "$NVMF_PORT", 00:29:49.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.756 "hdgst": ${hdgst:-false}, 00:29:49.756 "ddgst": ${ddgst:-false} 00:29:49.756 }, 00:29:49.756 "method": "bdev_nvme_attach_controller" 00:29:49.756 } 00:29:49.756 EOF 00:29:49.756 )") 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.756 { 00:29:49.756 "params": { 00:29:49.756 "name": "Nvme$subsystem", 00:29:49.756 "trtype": "$TEST_TRANSPORT", 00:29:49.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.756 "adrfam": "ipv4", 00:29:49.756 "trsvcid": "$NVMF_PORT", 00:29:49.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.756 "hdgst": ${hdgst:-false}, 00:29:49.756 "ddgst": ${ddgst:-false} 00:29:49.756 }, 00:29:49.756 "method": "bdev_nvme_attach_controller" 00:29:49.756 } 00:29:49.756 EOF 00:29:49.756 )") 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.756 { 00:29:49.756 "params": { 00:29:49.756 "name": "Nvme$subsystem", 00:29:49.756 "trtype": "$TEST_TRANSPORT", 00:29:49.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.756 "adrfam": "ipv4", 00:29:49.756 "trsvcid": "$NVMF_PORT", 00:29:49.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.756 "hdgst": ${hdgst:-false}, 00:29:49.756 "ddgst": ${ddgst:-false} 00:29:49.756 }, 00:29:49.756 "method": "bdev_nvme_attach_controller" 00:29:49.756 } 00:29:49.756 EOF 00:29:49.756 )") 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.756 { 00:29:49.756 "params": { 00:29:49.756 "name": "Nvme$subsystem", 00:29:49.756 "trtype": "$TEST_TRANSPORT", 00:29:49.756 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:29:49.756 "adrfam": "ipv4", 00:29:49.756 "trsvcid": "$NVMF_PORT", 00:29:49.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.756 "hdgst": ${hdgst:-false}, 00:29:49.756 "ddgst": ${ddgst:-false} 00:29:49.756 }, 00:29:49.756 "method": "bdev_nvme_attach_controller" 00:29:49.756 } 00:29:49.756 EOF 00:29:49.756 )") 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.756 { 00:29:49.756 "params": { 00:29:49.756 "name": "Nvme$subsystem", 00:29:49.756 "trtype": "$TEST_TRANSPORT", 00:29:49.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.756 "adrfam": "ipv4", 00:29:49.756 "trsvcid": "$NVMF_PORT", 00:29:49.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.756 "hdgst": ${hdgst:-false}, 00:29:49.756 "ddgst": ${ddgst:-false} 00:29:49.756 }, 00:29:49.756 "method": "bdev_nvme_attach_controller" 00:29:49.756 } 00:29:49.756 EOF 00:29:49.756 )") 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.756 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:29:49.756 20:50:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:49.756 "params": { 00:29:49.756 "name": "Nvme1", 00:29:49.756 "trtype": "rdma", 00:29:49.756 "traddr": "192.168.100.8", 00:29:49.756 "adrfam": "ipv4", 00:29:49.756 "trsvcid": "4420", 00:29:49.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:49.756 "hdgst": false, 00:29:49.756 "ddgst": false 00:29:49.756 }, 00:29:49.756 "method": "bdev_nvme_attach_controller" 00:29:49.756 },{ 00:29:49.756 "params": { 00:29:49.756 "name": "Nvme2", 00:29:49.756 "trtype": "rdma", 00:29:49.756 "traddr": "192.168.100.8", 00:29:49.756 "adrfam": "ipv4", 00:29:49.756 "trsvcid": "4420", 00:29:49.756 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:49.756 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:49.756 "hdgst": false, 00:29:49.756 "ddgst": false 00:29:49.756 }, 00:29:49.756 "method": "bdev_nvme_attach_controller" 00:29:49.756 },{ 00:29:49.756 "params": { 00:29:49.756 "name": "Nvme3", 00:29:49.756 "trtype": "rdma", 00:29:49.756 "traddr": "192.168.100.8", 00:29:49.756 "adrfam": "ipv4", 00:29:49.756 "trsvcid": "4420", 00:29:49.756 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:49.756 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:49.756 "hdgst": false, 00:29:49.756 "ddgst": false 00:29:49.756 }, 00:29:49.756 "method": "bdev_nvme_attach_controller" 00:29:49.756 },{ 00:29:49.756 "params": { 00:29:49.756 "name": "Nvme4", 00:29:49.756 "trtype": "rdma", 00:29:49.756 "traddr": "192.168.100.8", 00:29:49.756 "adrfam": "ipv4", 00:29:49.756 "trsvcid": "4420", 00:29:49.756 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:49.756 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:49.756 "hdgst": false, 00:29:49.756 "ddgst": false 00:29:49.756 }, 00:29:49.756 
"method": "bdev_nvme_attach_controller" 00:29:49.756 },{ 00:29:49.756 "params": { 00:29:49.756 "name": "Nvme5", 00:29:49.756 "trtype": "rdma", 00:29:49.756 "traddr": "192.168.100.8", 00:29:49.756 "adrfam": "ipv4", 00:29:49.756 "trsvcid": "4420", 00:29:49.756 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:49.756 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:49.756 "hdgst": false, 00:29:49.756 "ddgst": false 00:29:49.756 }, 00:29:49.756 "method": "bdev_nvme_attach_controller" 00:29:49.756 },{ 00:29:49.756 "params": { 00:29:49.756 "name": "Nvme6", 00:29:49.756 "trtype": "rdma", 00:29:49.756 "traddr": "192.168.100.8", 00:29:49.756 "adrfam": "ipv4", 00:29:49.756 "trsvcid": "4420", 00:29:49.756 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:49.756 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:49.756 "hdgst": false, 00:29:49.756 "ddgst": false 00:29:49.756 }, 00:29:49.756 "method": "bdev_nvme_attach_controller" 00:29:49.756 },{ 00:29:49.756 "params": { 00:29:49.756 "name": "Nvme7", 00:29:49.756 "trtype": "rdma", 00:29:49.757 "traddr": "192.168.100.8", 00:29:49.757 "adrfam": "ipv4", 00:29:49.757 "trsvcid": "4420", 00:29:49.757 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:49.757 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:49.757 "hdgst": false, 00:29:49.757 "ddgst": false 00:29:49.757 }, 00:29:49.757 "method": "bdev_nvme_attach_controller" 00:29:49.757 },{ 00:29:49.757 "params": { 00:29:49.757 "name": "Nvme8", 00:29:49.757 "trtype": "rdma", 00:29:49.757 "traddr": "192.168.100.8", 00:29:49.757 "adrfam": "ipv4", 00:29:49.757 "trsvcid": "4420", 00:29:49.757 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:49.757 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:49.757 "hdgst": false, 00:29:49.757 "ddgst": false 00:29:49.757 }, 00:29:49.757 "method": "bdev_nvme_attach_controller" 00:29:49.757 },{ 00:29:49.757 "params": { 00:29:49.757 "name": "Nvme9", 00:29:49.757 "trtype": "rdma", 00:29:49.757 "traddr": "192.168.100.8", 00:29:49.757 "adrfam": "ipv4", 00:29:49.757 "trsvcid": "4420", 00:29:49.757 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:49.757 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:49.757 "hdgst": false, 00:29:49.757 "ddgst": false 00:29:49.757 }, 00:29:49.757 "method": "bdev_nvme_attach_controller" 00:29:49.757 },{ 00:29:49.757 "params": { 00:29:49.757 "name": "Nvme10", 00:29:49.757 "trtype": "rdma", 00:29:49.757 "traddr": "192.168.100.8", 00:29:49.757 "adrfam": "ipv4", 00:29:49.757 "trsvcid": "4420", 00:29:49.757 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:49.757 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:49.757 "hdgst": false, 00:29:49.757 "ddgst": false 00:29:49.757 }, 00:29:49.757 "method": "bdev_nvme_attach_controller" 00:29:49.757 }' 00:29:49.757 [2024-07-26 20:50:38.287926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.016 [2024-07-26 20:50:38.326538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.953 Running I/O for 10 seconds... 
00:29:50.953 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:50.953 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:50.953 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:50.953 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.953 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:50.953 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.953 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:50.953 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:50.953 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:50.953 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:50.953 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:29:50.953 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:29:50.954 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:50.954 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:50.954 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:50.954 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:50.954 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.954 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:51.213 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.213 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=26 00:29:51.213 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 26 -ge 100 ']' 00:29:51.213 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:51.472 20:50:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=176 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 176 -ge 100 ']' 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1248392 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1248392 ']' 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1248392 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1248392 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1248392' 00:29:51.472 killing process with pid 1248392 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1248392 00:29:51.472 20:50:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1248392 00:29:52.040 20:50:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:29:52.040 20:50:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:29:52.609 [2024-07-26 20:50:40.982957] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller. 00:29:52.609 [2024-07-26 20:50:40.985232] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:29:52.609 [2024-07-26 20:50:40.987700] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller. 
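This is the crux of shutdown_tc3: the nvmf target (pid 1248392, comm reactor_1) is killed while bdevperf still has the verify workload in flight on all ten controllers. Everything that follows is the initiator reacting: each qpair is disconnected and freed with a "reset controller" notice, and every outstanding WRITE is completed locally with ABORTED - SQ DELETION (00/08), NVMe generic status 0x08, "Command Aborted due to SQ Deletion", as the submission queues are torn down. For reference, killprocess as traced above, reconstructed from autotest_common.sh@950 through @974 (the branch for sudo-wrapped processes is elided since it is not taken here):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1               # bail out if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # "reactor_1" in this run
        fi
        # the harness special-cases process_name == sudo; not needed for this pid
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                      # reap it if it is our child
    }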
00:29:52.609 [2024-07-26 20:50:40.990110] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller.
00:29:52.609 [2024-07-26 20:50:40.992686] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller.
00:29:52.609 [2024-07-26 20:50:40.995277] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller.
00:29:52.609 [2024-07-26 20:50:40.997509] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller.
00:29:52.609 [2024-07-26 20:50:41.000406] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller.
00:29:52.609 [2024-07-26 20:50:41.000511-41.002790] nvme_qpair.c:243/474: *NOTICE*: all in-flight I/O on qid:1 completed as ABORTED - SQ DELETION (00/08): WRITE lba:33024-40832 and READ lba:32768-32896 (len:128 each, SGL KEYED DATA BLOCK, keys 0x184100/0x184000/0x184200/0x183300/0x184400); every completion cdw0:63b1f000 sqhd:52b0 p:0 m:0 dnr:0
00:29:52.611 [2024-07-26 20:50:41.005140] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller.
00:29:52.611 [2024-07-26 20:50:41.005196-41.007640] nvme_qpair.c:243/474: *NOTICE*: all in-flight I/O on qid:1 completed as ABORTED - SQ DELETION (00/08): WRITE lba:16384-24448 (len:128 each, SGL KEYED DATA BLOCK, keys 0x184200/0x183c00/0x184500); every completion cdw0:63b1f000 sqhd:52b0 p:0 m:0 dnr:0
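Status "(00/08)" in the records above is status code type 0x0 (generic) with status code 0x8, "Command Aborted due to SQ Deletion": each command still queued when the qpair is torn down for the reset completes with this status instead of reaching the media. Below is a minimal sketch of a completion callback that recognizes that status using SPDK's public spdk_nvme_cpl definitions; the struct my_io context and the io_complete_cb name are hypothetical, and this is the kind of cb_fn one would pass to e.g. spdk_nvme_ns_cmd_write().

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <spdk/nvme.h>

/* Hypothetical per-command context, for illustration only. */
struct my_io {
    uint64_t lba;
    bool     needs_retry;
};

static void
io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    struct my_io *io = cb_arg;

    if (!spdk_nvme_cpl_is_error(cpl)) {
        return; /* completed normally */
    }

    /* "(00/08)": SCT 0x0 (generic), SC 0x8 (aborted - SQ deletion).
     * The command never executed, so it is safe to resubmit it once
     * the controller reset finishes. */
    if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
        cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
        io->needs_retry = true;
        return;
    }

    fprintf(stderr, "I/O at lba %" PRIu64 " failed: sct=0x%x sc=0x%x\n",
            io->lba, cpl->status.sct, cpl->status.sc);
}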
00:29:52.612 [2024-07-26 20:50:41.010465] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller.
00:29:52.612 [2024-07-26 20:50:41.010550-41.010674] nvme_qpair.c:223/474: *NOTICE*: admin qid:0 ASYNC EVENT REQUEST (0c) cid:1-4 completed as ABORTED - SQ DELETION (00/08) cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.613 [2024-07-26 20:50:41.013164-41.032911] the same failure sequence repeats for cnode6, cnode7, cnode8, cnode5, cnode2, cnode1, cnode3, cnode4 and cnode10:
    nvme_qpair.c:804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
    nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnodeN] in failed state.
    bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
    nvme_qpair.c:223/474: *NOTICE*: admin qid:0 ASYNC EVENT REQUEST (0c) cid:1-4 completed as ABORTED - SQ DELETION (00/08) cid:268 cdw0:0 sqhd:3700 p:1 m:1 dnr:0
00:29:52.614 [2024-07-26 20:50:41.032502] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:52.614 [2024-07-26 20:50:41.032546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.614 [2024-07-26 20:50:41.032585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:268 cdw0:0 sqhd:3700 p:1 m:1 dnr:0 00:29:52.614 [2024-07-26 20:50:41.032619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.614 [2024-07-26 20:50:41.032781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:268 cdw0:0 sqhd:3700 p:1 m:1 dnr:0 00:29:52.614 [2024-07-26 20:50:41.032815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.614 [2024-07-26 20:50:41.032846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:268 cdw0:0 sqhd:3700 p:1 m:1 dnr:0 00:29:52.614 [2024-07-26 20:50:41.032879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.614 [2024-07-26 20:50:41.032911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:268 cdw0:0 sqhd:3700 p:1 m:1 dnr:0 00:29:52.614 [2024-07-26 20:50:41.052607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:52.614 [2024-07-26 20:50:41.052667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:29:52.614 [2024-07-26 20:50:41.052700] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:52.614 [2024-07-26 20:50:41.060752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.614 [2024-07-26 20:50:41.060780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:52.614 [2024-07-26 20:50:41.060791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:52.614 [2024-07-26 20:50:41.060832] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:52.614 [2024-07-26 20:50:41.060845] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:52.614 [2024-07-26 20:50:41.060857] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:52.614 [2024-07-26 20:50:41.060870] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:52.614 [2024-07-26 20:50:41.060884] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:52.614 [2024-07-26 20:50:41.060896] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:52.614 [2024-07-26 20:50:41.060908] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
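The storm of ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs above is the expected signature of this shutdown test rather than a fault: the target tears down its admin submission queues while the host still has AER commands outstanding, so every outstanding AER completes as aborted, bdev_nvme marks the controller failed, and the repeated "Unable to perform failover, already in progress" notices show the redundant failover requests being coalesced into the one reset already queued per controller. As a minimal sketch only, the same controller state could be inspected from outside the run with the bdevperf RPC socket the harness passed via -r, assuming the process is still answering (the jq filter is illustrative, not part of the recorded run):

  # list the attached NVMe-oF controllers and confirm which cnodes are present
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq '.[].name'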
00:29:52.614 [2024-07-26 20:50:41.060987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:29:52.614 [2024-07-26 20:50:41.060999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:29:52.614 [2024-07-26 20:50:41.061009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:29:52.614 [2024-07-26 20:50:41.061022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:29:52.614 [2024-07-26 20:50:41.063082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:29:52.614 task offset: 40960 on job bdev=Nvme6n1 fails
00:29:52.614
00:29:52.614 Latency(us)
00:29:52.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:52.614 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.614 Job: Nvme1n1 ended in about 1.86 seconds with error
00:29:52.614 Verification LBA range: start 0x0 length 0x400
00:29:52.614 Nvme1n1 : 1.86 149.81 9.36 34.37 0.00 345136.42 6160.38 1067030.94
00:29:52.614 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.614 Job: Nvme2n1 ended in about 1.86 seconds with error
00:29:52.614 Verification LBA range: start 0x0 length 0x400
00:29:52.614 Nvme2n1 : 1.86 145.99 9.12 34.35 0.00 349415.79 8493.47 1067030.94
00:29:52.614 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.614 Job: Nvme3n1 ended in about 1.86 seconds with error
00:29:52.614 Verification LBA range: start 0x0 length 0x400
00:29:52.614 Nvme3n1 : 1.86 154.52 9.66 34.34 0.00 330972.59 11586.76 1067030.94
00:29:52.614 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.614 Job: Nvme4n1 ended in about 1.86 seconds with error
00:29:52.614 Verification LBA range: start 0x0 length 0x400
00:29:52.614 Nvme4n1 : 1.86 149.63 9.35 34.32 0.00 337003.80 4456.45 1067030.94
00:29:52.614 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.614 Job: Nvme5n1 ended in about 1.87 seconds with error
00:29:52.614 Verification LBA range: start 0x0 length 0x400
00:29:52.614 Nvme5n1 : 1.87 140.45 8.78 34.31 0.00 351795.32 26004.68 1067030.94
00:29:52.614 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.614 Job: Nvme6n1 ended in about 1.87 seconds with error
00:29:52.614 Verification LBA range: start 0x0 length 0x400
00:29:52.614 Nvme6n1 : 1.87 145.75 9.11 34.29 0.00 338325.54 28730.98 1060320.05
00:29:52.614 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.615 Job: Nvme7n1 ended in about 1.87 seconds with error
00:29:52.615 Verification LBA range: start 0x0 length 0x400
00:29:52.615 Nvme7n1 : 1.87 154.26 9.64 34.28 0.00 320456.74 32296.14 1060320.05
00:29:52.615 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.615 Job: Nvme8n1 ended in about 1.87 seconds with error
00:29:52.615 Verification LBA range: start 0x0 length 0x400
00:29:52.615 Nvme8n1 : 1.87 149.92 9.37 34.27 0.00 325113.73 40055.60 1060320.05
00:29:52.615 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.615 Job: Nvme9n1 ended in about 1.87 seconds with error
00:29:52.615 Verification LBA range: start 0x0 length 0x400
00:29:52.615 Nvme9n1 : 1.87 137.01 8.56 34.25 0.00 347098.32 42781.90 1120718.03
00:29:52.615 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:52.615 Job: Nvme10n1 ended in about 1.87 seconds with error
00:29:52.615 Verification LBA range: start 0x0 length 0x400
00:29:52.615 Nvme10n1 : 1.87 68.48 4.28 34.24 0.00 573625.14 51589.94 1107296.26
00:29:52.615 ===================================================================================================================
00:29:52.615 Total : 1395.81 87.24 343.02 0.00 352062.33 4456.45 1120718.03
00:29:52.615 [2024-07-26 20:50:41.082682] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:52.615 [2024-07-26 20:50:41.082702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:29:52.615 [2024-07-26 20:50:41.082715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:29:52.615 [2024-07-26 20:50:41.091935] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:52.615 [2024-07-26 20:50:41.091995] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:52.615 [2024-07-26 20:50:41.092023] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:29:52.615 [2024-07-26 20:50:41.092140] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:52.615 [2024-07-26 20:50:41.092175] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:52.615 [2024-07-26 20:50:41.092200] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300
00:29:52.615 [2024-07-26 20:50:41.092304] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:52.615 [2024-07-26 20:50:41.092345] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:52.615 [2024-07-26 20:50:41.092371] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80
00:29:52.615 [2024-07-26 20:50:41.095925] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:52.615 [2024-07-26 20:50:41.095975] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:52.615 [2024-07-26 20:50:41.096001] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900
00:29:52.615 [2024-07-26 20:50:41.096149] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:52.615 [2024-07-26 20:50:41.096184] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:29:52.615 [2024-07-26 20:50:41.096209] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340
00:29:52.615 [2024-07-26 20:50:41.096337] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:29:52.615 [2024-07-26 20:50:41.096371] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*:
RDMA connect error -74 00:29:52.615 [2024-07-26 20:50:41.096396] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:29:52.615 [2024-07-26 20:50:41.096509] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:52.615 [2024-07-26 20:50:41.096543] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:52.615 [2024-07-26 20:50:41.096571] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:29:52.615 [2024-07-26 20:50:41.097307] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:52.615 [2024-07-26 20:50:41.097325] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:52.615 [2024-07-26 20:50:41.097335] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:29:52.615 [2024-07-26 20:50:41.097422] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:52.615 [2024-07-26 20:50:41.097438] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:52.615 [2024-07-26 20:50:41.097448] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080 00:29:52.615 [2024-07-26 20:50:41.097554] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:52.615 [2024-07-26 20:50:41.097568] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:52.615 [2024-07-26 20:50:41.097578] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0 00:29:52.874 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1248714 00:29:52.874 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:29:52.874 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:52.875 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:52.875 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:52.875 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:52.875 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:52.875 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:29:52.875 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:52.875 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:52.875 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@120 -- # set +e 00:29:52.875 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:53.133 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:53.133 rmmod nvme_rdma 00:29:53.133 rmmod nvme_fabrics 00:29:53.133 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 1248714 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:53.134 00:29:53.134 real 0m5.176s 00:29:53.134 user 0m17.410s 00:29:53.134 sys 0m1.385s 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:53.134 ************************************ 00:29:53.134 END TEST nvmf_shutdown_tc3 00:29:53.134 ************************************ 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:29:53.134 00:29:53.134 real 0m26.153s 00:29:53.134 user 1m10.926s 00:29:53.134 sys 0m10.404s 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:53.134 ************************************ 00:29:53.134 END TEST nvmf_shutdown 00:29:53.134 ************************************ 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:29:53.134 00:29:53.134 real 17m21.145s 00:29:53.134 user 50m15.971s 00:29:53.134 sys 3m28.218s 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:53.134 20:50:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:53.134 ************************************ 00:29:53.134 END TEST nvmf_target_extra 00:29:53.134 ************************************ 00:29:53.134 20:50:41 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:29:53.134 20:50:41 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:53.134 20:50:41 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:53.134 20:50:41 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:53.134 ************************************ 00:29:53.134 START TEST nvmf_host 00:29:53.134 ************************************ 00:29:53.134 20:50:41 
nvmf_rdma.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:29:53.393 * Looking for test storage... 00:29:53.394 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.394 ************************************ 00:29:53.394 START TEST nvmf_multicontroller 00:29:53.394 ************************************ 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:29:53.394 * Looking for test 
storage... 00:29:53.394 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.394 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.654 20:50:41 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller 
-- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:29:53.654 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:29:53.654 00:29:53.654 real 0m0.143s 00:29:53.654 user 0m0.056s 00:29:53.654 sys 0m0.097s 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:53.654 ************************************ 00:29:53.654 END TEST nvmf_multicontroller 00:29:53.654 ************************************ 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:53.654 20:50:41 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.654 ************************************ 00:29:53.654 START TEST nvmf_aer 00:29:53.654 ************************************ 00:29:53.654 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:29:53.654 * Looking for test storage... 
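The nvmf_multicontroller test above exits before doing any work: it needs the host and target sides to share one IP, which the Linux RDMA stack on this rig cannot configure, so the script bails out with status 0 and the harness records it as an early pass. Reconstructed from the trace at host/multicontroller.sh lines 18-20, the guard amounts to something like the following sketch (the variable name is assumed from the harness conventions, not confirmed by the trace, which only shows the already-expanded value rdma):

  # skip guard at the top of multicontroller.sh, as traced above
  if [ "$TEST_TRANSPORT" == rdma ]; then
      echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
      exit 0
  fi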
00:29:53.654 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:53.654 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.654 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:53.654 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:29:53.655 20:50:42 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:01.832 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
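gather_supported_nvmf_pci_devs, whose trace begins above, classifies NICs purely by PCI vendor and device ID: 0x8086 devices are sorted into the e810/x722 arrays and 0x15b3 (Mellanox) devices into the mlx array, and the mlx list then wins on this rig. The same classification can be reproduced by hand with lspci's vendor:device filter, shown here only as a sketch (0x1015 is the ConnectX-4 Lx ID that the script matches a few records below):

  lspci -d 15b3:          # every Mellanox function in the system
  lspci -d 15b3:1015      # only the ConnectX-4 Lx ports, 0000:d9:00.0/.1 on this host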
00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:01.833 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:01.833 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:01.833 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
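For each selected PCI function the script resolves the kernel net device through sysfs rather than any driver query, which is why "Found net devices under 0000:d9:00.0: mlx_0_0" appears with nothing more than a glob of the device directory. The equivalent lookup done by hand, using the PCI addresses from this run:

  # a NIC's net devices are exposed under its PCI address in sysfs
  ls /sys/bus/pci/devices/0000:d9:00.0/net    # -> mlx_0_0
  ls /sys/bus/pci/devices/0000:d9:00.1/net    # -> mlx_0_1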
00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:01.833 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:01.833 20:50:50 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:01.833 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:01.833 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:01.833 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:01.833 altname enp217s0f0np0 00:30:01.833 altname ens818f0np0 00:30:01.834 inet 192.168.100.8/24 scope global mlx_0_0 00:30:01.834 valid_lft forever preferred_lft forever 00:30:01.834 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:01.834 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:01.834 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:01.834 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:01.834 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:01.834 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:01.834 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:01.834 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:01.834 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:01.834 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:01.834 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:01.834 altname enp217s0f1np1 00:30:01.834 altname ens818f1np1 00:30:01.834 inet 192.168.100.9/24 scope global mlx_0_1 00:30:01.834 valid_lft forever preferred_lft forever 00:30:01.834 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:30:01.834 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:01.834 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:01.834 20:50:50 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:02.093 192.168.100.9' 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:02.093 192.168.100.9' 
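[annotation] The get_ip_address calls traced above reduce to a three-stage pipeline over `ip -o -4 addr show`, which is how RDMA_IP_LIST ends up holding 192.168.100.8 and 192.168.100.9. A minimal standalone sketch of that helper (the function name matches the trace; the interface names are the RoCE netdevs this rig happens to expose):

    #!/usr/bin/env bash
    # Mirror of the get_ip_address helper in the trace: `ip -o -4` prints
    # one line per address, field 4 is "ADDR/PREFIX", and cut drops the
    # prefix length, leaving the bare IPv4 address.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 on this test bed
    get_ip_address mlx_0_1   # -> 192.168.100.9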
00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:02.093 192.168.100.9' 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1253458 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1253458 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1253458 ']' 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:02.093 20:50:50 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:02.093 [2024-07-26 20:50:50.527645] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:30:02.093 [2024-07-26 20:50:50.527699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.093 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.093 [2024-07-26 20:50:50.614505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:02.352 [2024-07-26 20:50:50.656620] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:02.352 [2024-07-26 20:50:50.656672] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.352 [2024-07-26 20:50:50.656682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.352 [2024-07-26 20:50:50.656691] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.352 [2024-07-26 20:50:50.656698] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.352 [2024-07-26 20:50:50.656742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.352 [2024-07-26 20:50:50.656838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.352 [2024-07-26 20:50:50.656857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.352 [2024-07-26 20:50:50.656858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.917 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:02.917 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:30:02.917 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:02.917 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:02.917 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:02.917 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.917 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:02.917 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.917 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:02.917 [2024-07-26 20:50:51.415671] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6fdea0/0x702390) succeed. 00:30:02.917 [2024-07-26 20:50:51.424897] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6ff4e0/0x743a20) succeed. 
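[annotation] rpc_cmd in the trace above is the harness wrapper around SPDK's RPC client, so the transport setup that follows nvmfappstart reduces to a single call against the running target; the two create_ib_device notices confirm both mlx5 ports were bound. A hedged equivalent using scripts/rpc.py from an SPDK checkout (socket path is the default /var/tmp/spdk.sock used throughout this log; per stock rpc.py, -u is the I/O unit size):

    # Create the RDMA transport exactly as host/aer.sh@14 does above:
    # a 1024-entry shared receive buffer pool and an 8 KiB I/O unit.
    ./scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The subsystem, namespace, and listener provisioning traced next goes through the same client.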
00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:03.174 Malloc0 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:03.174 [2024-07-26 20:50:51.589017] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:03.174 [ 00:30:03.174 { 00:30:03.174 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:03.174 "subtype": "Discovery", 00:30:03.174 "listen_addresses": [], 00:30:03.174 "allow_any_host": true, 00:30:03.174 "hosts": [] 00:30:03.174 }, 00:30:03.174 { 00:30:03.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.174 "subtype": "NVMe", 00:30:03.174 "listen_addresses": [ 00:30:03.174 { 00:30:03.174 "trtype": "RDMA", 00:30:03.174 "adrfam": "IPv4", 00:30:03.174 "traddr": "192.168.100.8", 00:30:03.174 "trsvcid": "4420" 00:30:03.174 } 00:30:03.174 ], 00:30:03.174 "allow_any_host": true, 00:30:03.174 "hosts": [], 00:30:03.174 "serial_number": "SPDK00000000000001", 00:30:03.174 "model_number": "SPDK bdev Controller", 00:30:03.174 "max_namespaces": 2, 00:30:03.174 "min_cntlid": 1, 00:30:03.174 "max_cntlid": 65519, 00:30:03.174 "namespaces": [ 00:30:03.174 { 00:30:03.174 "nsid": 1, 00:30:03.174 "bdev_name": "Malloc0", 00:30:03.174 "name": "Malloc0", 00:30:03.174 "nguid": "D97DC9882D8B429098F2C6B567E57EBE", 00:30:03.174 "uuid": "d97dc988-2d8b-4290-98f2-c6b567e57ebe" 00:30:03.174 } 00:30:03.174 ] 00:30:03.174 } 00:30:03.174 ] 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1253743 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:03.174 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:30:03.174 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:03.431 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:03.431 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:03.431 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:30:03.431 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:03.431 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.431 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:03.431 Malloc1 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:03.432 [ 00:30:03.432 { 00:30:03.432 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:03.432 "subtype": "Discovery", 00:30:03.432 "listen_addresses": [], 00:30:03.432 "allow_any_host": true, 00:30:03.432 "hosts": [] 00:30:03.432 }, 00:30:03.432 { 00:30:03.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.432 "subtype": "NVMe", 00:30:03.432 "listen_addresses": [ 00:30:03.432 { 00:30:03.432 "trtype": "RDMA", 00:30:03.432 "adrfam": "IPv4", 00:30:03.432 "traddr": "192.168.100.8", 00:30:03.432 "trsvcid": "4420" 00:30:03.432 } 00:30:03.432 ], 00:30:03.432 "allow_any_host": true, 00:30:03.432 "hosts": [], 00:30:03.432 "serial_number": "SPDK00000000000001", 00:30:03.432 "model_number": "SPDK bdev Controller", 00:30:03.432 "max_namespaces": 2, 00:30:03.432 "min_cntlid": 1, 00:30:03.432 "max_cntlid": 65519, 00:30:03.432 "namespaces": [ 00:30:03.432 { 00:30:03.432 "nsid": 1, 00:30:03.432 "bdev_name": "Malloc0", 00:30:03.432 "name": "Malloc0", 00:30:03.432 "nguid": "D97DC9882D8B429098F2C6B567E57EBE", 00:30:03.432 "uuid": "d97dc988-2d8b-4290-98f2-c6b567e57ebe" 00:30:03.432 }, 00:30:03.432 { 00:30:03.432 "nsid": 2, 00:30:03.432 "bdev_name": "Malloc1", 00:30:03.432 "name": "Malloc1", 00:30:03.432 "nguid": "12EB343F936A46F4AB6F3E2F291F3774", 00:30:03.432 "uuid": "12eb343f-936a-46f4-ab6f-3e2f291f3774" 00:30:03.432 } 00:30:03.432 ] 00:30:03.432 } 00:30:03.432 ] 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1253743 00:30:03.432 Asynchronous Event Request test 00:30:03.432 Attaching to 192.168.100.8 00:30:03.432 Attached to 192.168.100.8 00:30:03.432 Registering asynchronous event callbacks... 00:30:03.432 Starting namespace attribute notice tests for all controllers... 00:30:03.432 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:03.432 aer_cb - Changed Namespace 00:30:03.432 Cleaning up... 
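[annotation] The block above is the heart of the test: the aer tool arms its AEN callback and then creates the touch file, the script polls for that file (the i-counter loop from autotest_common.sh), and only then hot-adds a second namespace, so the Changed Namespace event is guaranteed to arrive after the callback is registered. A condensed sketch of that handshake with the polling cap dropped (paths, NQN, and sizes copied from this log):

    #!/usr/bin/env bash
    AER_TOUCH_FILE=/tmp/aer_touch_file
    rm -f "$AER_TOUCH_FILE"

    # Start the AER listener against the RDMA target; -n 2 expects two
    # namespaces, -t names the file it creates once callbacks are armed.
    ./test/nvme/aer/aer \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t "$AER_TOUCH_FILE" &
    aerpid=$!

    # waitforfile: same 0.1 s polling seen in the trace, minus the cap
    while [ ! -e "$AER_TOUCH_FILE" ]; do sleep 0.1; done

    # Hot-add nsid 2; the target raises the AEN the tool is waiting on
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"   # returns once "aer_cb - Changed Namespace" fires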
00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.432 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:03.690 20:50:51 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:03.690 rmmod nvme_rdma 00:30:03.690 rmmod nvme_fabrics 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1253458 ']' 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1253458 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1253458 ']' 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1253458 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1253458 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1253458' 00:30:03.690 killing process 
with pid 1253458 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1253458 00:30:03.690 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1253458 00:30:03.948 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:03.948 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:03.948 00:30:03.948 real 0m10.308s 00:30:03.948 user 0m8.934s 00:30:03.948 sys 0m6.943s 00:30:03.948 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:03.948 20:50:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:03.948 ************************************ 00:30:03.948 END TEST nvmf_aer 00:30:03.948 ************************************ 00:30:03.948 20:50:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:30:03.948 20:50:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:03.948 20:50:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:03.948 20:50:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.948 ************************************ 00:30:03.948 START TEST nvmf_async_init 00:30:03.948 ************************************ 00:30:03.948 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:30:04.207 * Looking for test storage... 00:30:04.207 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.207 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d9b06a0b68374df29173db6ced33150f 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:04.208 20:50:52 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:30:04.208 20:50:52 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 
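[annotation] The array setup above whitelists NVMe-oF capable NICs by PCI vendor:device pair before any netdev work starts. A rough standalone equivalent using lspci rather than the script's cached sysfs scan (the IDs are the Mellanox ones listed in the trace; 0x1015 is the ConnectX-4 Lx this rig reports at 0000:d9:00.0/.1):

    #!/usr/bin/env bash
    mellanox=15b3
    mlx_ids=(1013 1015 1017 1019 101d 1021 a2d6 a2dc)

    # Collect domain-qualified PCI addresses of matching functions;
    # lspci -D prints the full DDDD:BB:DD.F form, -d filters by ID.
    mapfile -t pci_devs < <(
        for id in "${mlx_ids[@]}"; do
            lspci -D -d "${mellanox}:${id}" | awk '{print $1}'
        done
    )
    printf 'Found %s\n' "${pci_devs[@]}"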
00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:12.320 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:12.320 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:12.320 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:12.320 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 
== \m\l\x\_\0\_\0 ]] 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:12.320 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:12.321 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:12.321 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:12.321 altname enp217s0f0np0 00:30:12.321 altname ens818f0np0 00:30:12.321 inet 192.168.100.8/24 scope global mlx_0_0 00:30:12.321 valid_lft forever preferred_lft forever 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:12.321 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:12.321 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:12.321 altname enp217s0f1np1 00:30:12.321 altname ens818f1np1 00:30:12.321 inet 192.168.100.9/24 scope global mlx_0_1 00:30:12.321 valid_lft forever preferred_lft 
forever 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:12.321 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 
00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:12.581 192.168.100.9' 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:12.581 192.168.100.9' 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:12.581 192.168.100.9' 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1257973 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1257973 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1257973 ']' 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
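[annotation] Same start-and-wait pattern as the earlier aer run, only with a single-core mask (-m 0x1). What nvmfappstart plus waitforlisten amount to, as a minimal sketch (spdk_get_version is a stock no-op query in rpc.py; treat the polling details as an approximation of autotest_common.sh, not a copy):

    #!/usr/bin/env bash
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket until the target answers,
    # bailing out early if the process died during startup.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening"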
00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:12.581 20:51:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:12.581 [2024-07-26 20:51:00.988051] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:30:12.581 [2024-07-26 20:51:00.988109] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.581 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.581 [2024-07-26 20:51:01.076558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.581 [2024-07-26 20:51:01.114317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.581 [2024-07-26 20:51:01.114363] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.581 [2024-07-26 20:51:01.114372] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.581 [2024-07-26 20:51:01.114381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.581 [2024-07-26 20:51:01.114388] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.581 [2024-07-26 20:51:01.114412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.519 [2024-07-26 20:51:01.864445] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20c8c80/0x20cd170) succeed. 00:30:13.519 [2024-07-26 20:51:01.873539] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20ca180/0x210e800) succeed. 
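[annotation] With both mlx5 ports bound to the new transport, the next traced steps provision a null bdev whose namespace carries the NGUID generated earlier (uuidgen with the dashes stripped via tr -d -), then attach a host-side controller to it over the same fabric, which is what surfaces as nvme0n1 below. Collapsed into direct RPC calls, with all values copied from this log:

    #!/usr/bin/env bash
    uuid=$(uuidgen)      # this run drew d9b06a0b-6837-4df2-9173-db6ced33150f
    nguid=${uuid//-/}    # NGUID form: same hex, no dashes

    RPC='./scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC bdev_null_create null0 1024 512   # 1024 MiB, 512 B blocks -> 2097152 blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t rdma -a 192.168.100.8 -s 4420
    # Host side of the loopback: the attached controller exposes nvme0n1
    $RPC bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0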
00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.519 null0 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.519 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d9b06a0b68374df29173db6ced33150f 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.520 [2024-07-26 20:51:01.951884] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.520 20:51:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.520 nvme0n1 00:30:13.520 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.520 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:13.520 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.520 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.520 [ 
00:30:13.520 { 00:30:13.520 "name": "nvme0n1", 00:30:13.520 "aliases": [ 00:30:13.520 "d9b06a0b-6837-4df2-9173-db6ced33150f" 00:30:13.520 ], 00:30:13.520 "product_name": "NVMe disk", 00:30:13.520 "block_size": 512, 00:30:13.520 "num_blocks": 2097152, 00:30:13.520 "uuid": "d9b06a0b-6837-4df2-9173-db6ced33150f", 00:30:13.520 "assigned_rate_limits": { 00:30:13.520 "rw_ios_per_sec": 0, 00:30:13.520 "rw_mbytes_per_sec": 0, 00:30:13.520 "r_mbytes_per_sec": 0, 00:30:13.520 "w_mbytes_per_sec": 0 00:30:13.520 }, 00:30:13.520 "claimed": false, 00:30:13.520 "zoned": false, 00:30:13.520 "supported_io_types": { 00:30:13.520 "read": true, 00:30:13.520 "write": true, 00:30:13.520 "unmap": false, 00:30:13.520 "flush": true, 00:30:13.520 "reset": true, 00:30:13.520 "nvme_admin": true, 00:30:13.520 "nvme_io": true, 00:30:13.520 "nvme_io_md": false, 00:30:13.520 "write_zeroes": true, 00:30:13.520 "zcopy": false, 00:30:13.520 "get_zone_info": false, 00:30:13.520 "zone_management": false, 00:30:13.520 "zone_append": false, 00:30:13.520 "compare": true, 00:30:13.520 "compare_and_write": true, 00:30:13.520 "abort": true, 00:30:13.520 "seek_hole": false, 00:30:13.520 "seek_data": false, 00:30:13.520 "copy": true, 00:30:13.520 "nvme_iov_md": false 00:30:13.520 }, 00:30:13.520 "memory_domains": [ 00:30:13.520 { 00:30:13.520 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:30:13.520 "dma_device_type": 0 00:30:13.520 } 00:30:13.520 ], 00:30:13.520 "driver_specific": { 00:30:13.520 "nvme": [ 00:30:13.520 { 00:30:13.520 "trid": { 00:30:13.520 "trtype": "RDMA", 00:30:13.520 "adrfam": "IPv4", 00:30:13.520 "traddr": "192.168.100.8", 00:30:13.520 "trsvcid": "4420", 00:30:13.520 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:13.520 }, 00:30:13.520 "ctrlr_data": { 00:30:13.520 "cntlid": 1, 00:30:13.520 "vendor_id": "0x8086", 00:30:13.520 "model_number": "SPDK bdev Controller", 00:30:13.520 "serial_number": "00000000000000000000", 00:30:13.520 "firmware_revision": "24.09", 00:30:13.520 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.520 "oacs": { 00:30:13.520 "security": 0, 00:30:13.520 "format": 0, 00:30:13.520 "firmware": 0, 00:30:13.520 "ns_manage": 0 00:30:13.520 }, 00:30:13.520 "multi_ctrlr": true, 00:30:13.520 "ana_reporting": false 00:30:13.520 }, 00:30:13.520 "vs": { 00:30:13.520 "nvme_version": "1.3" 00:30:13.520 }, 00:30:13.520 "ns_data": { 00:30:13.520 "id": 1, 00:30:13.520 "can_share": true 00:30:13.520 } 00:30:13.520 } 00:30:13.520 ], 00:30:13.520 "mp_policy": "active_passive" 00:30:13.520 } 00:30:13.520 } 00:30:13.520 ] 00:30:13.520 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.520 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:13.520 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.520 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.520 [2024-07-26 20:51:02.066075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:13.780 [2024-07-26 20:51:02.084538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:13.780 [2024-07-26 20:51:02.105937] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
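The async_init flow traced above provisions a 1 GiB null bdev with 512-byte blocks (2097152 blocks in the dump), exposes it as namespace 1 of nqn.2016-06.io.spdk:cnode0 with a fixed GUID, listens on 192.168.100.8:4420, and attaches back to it as controller nvme0; the dashed "uuid"/"aliases" value in the bdev_get_bdevs output is that same -g GUID with hyphens inserted. Condensed as a hand-run sketch (rpc.py path assumed):

  sudo scripts/rpc.py bdev_null_create null0 1024 512
  sudo scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  sudo scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
      -g d9b06a0b68374df29173db6ced33150f
  sudo scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t rdma -a 192.168.100.8 -s 4420
  sudo scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma \
      -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  sudo scripts/rpc.py bdev_get_bdevs -b nvme0n1

The bdev_nvme_reset_controller call that closes the excerpt tears down and re-establishes the RDMA qpair; the second bdev_get_bdevs dump below shows the same namespace back with cntlid bumped from 1 to 2.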
00:30:13.780 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.780 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:13.780 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.780 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.780 [ 00:30:13.780 { 00:30:13.780 "name": "nvme0n1", 00:30:13.780 "aliases": [ 00:30:13.780 "d9b06a0b-6837-4df2-9173-db6ced33150f" 00:30:13.780 ], 00:30:13.780 "product_name": "NVMe disk", 00:30:13.780 "block_size": 512, 00:30:13.780 "num_blocks": 2097152, 00:30:13.780 "uuid": "d9b06a0b-6837-4df2-9173-db6ced33150f", 00:30:13.780 "assigned_rate_limits": { 00:30:13.780 "rw_ios_per_sec": 0, 00:30:13.780 "rw_mbytes_per_sec": 0, 00:30:13.780 "r_mbytes_per_sec": 0, 00:30:13.780 "w_mbytes_per_sec": 0 00:30:13.780 }, 00:30:13.780 "claimed": false, 00:30:13.780 "zoned": false, 00:30:13.780 "supported_io_types": { 00:30:13.780 "read": true, 00:30:13.780 "write": true, 00:30:13.780 "unmap": false, 00:30:13.780 "flush": true, 00:30:13.780 "reset": true, 00:30:13.780 "nvme_admin": true, 00:30:13.780 "nvme_io": true, 00:30:13.780 "nvme_io_md": false, 00:30:13.780 "write_zeroes": true, 00:30:13.780 "zcopy": false, 00:30:13.780 "get_zone_info": false, 00:30:13.780 "zone_management": false, 00:30:13.780 "zone_append": false, 00:30:13.780 "compare": true, 00:30:13.780 "compare_and_write": true, 00:30:13.780 "abort": true, 00:30:13.780 "seek_hole": false, 00:30:13.780 "seek_data": false, 00:30:13.780 "copy": true, 00:30:13.780 "nvme_iov_md": false 00:30:13.780 }, 00:30:13.780 "memory_domains": [ 00:30:13.780 { 00:30:13.780 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:30:13.780 "dma_device_type": 0 00:30:13.780 } 00:30:13.780 ], 00:30:13.780 "driver_specific": { 00:30:13.780 "nvme": [ 00:30:13.780 { 00:30:13.780 "trid": { 00:30:13.780 "trtype": "RDMA", 00:30:13.780 "adrfam": "IPv4", 00:30:13.780 "traddr": "192.168.100.8", 00:30:13.780 "trsvcid": "4420", 00:30:13.780 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:13.780 }, 00:30:13.780 "ctrlr_data": { 00:30:13.780 "cntlid": 2, 00:30:13.780 "vendor_id": "0x8086", 00:30:13.780 "model_number": "SPDK bdev Controller", 00:30:13.780 "serial_number": "00000000000000000000", 00:30:13.780 "firmware_revision": "24.09", 00:30:13.780 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.780 "oacs": { 00:30:13.780 "security": 0, 00:30:13.780 "format": 0, 00:30:13.780 "firmware": 0, 00:30:13.780 "ns_manage": 0 00:30:13.780 }, 00:30:13.780 "multi_ctrlr": true, 00:30:13.780 "ana_reporting": false 00:30:13.780 }, 00:30:13.780 "vs": { 00:30:13.780 "nvme_version": "1.3" 00:30:13.780 }, 00:30:13.780 "ns_data": { 00:30:13.780 "id": 1, 00:30:13.780 "can_share": true 00:30:13.780 } 00:30:13.780 } 00:30:13.780 ], 00:30:13.780 "mp_policy": "active_passive" 00:30:13.780 } 00:30:13.780 } 00:30:13.780 ] 00:30:13.780 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.780 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.780 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.780 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.780 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:30:13.780 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:13.780 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Hmbwx6ZrtP 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Hmbwx6ZrtP 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.781 [2024-07-26 20:51:02.172861] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Hmbwx6ZrtP 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Hmbwx6ZrtP 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.781 [2024-07-26 20:51:02.188894] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:13.781 nvme0n1 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.781 [ 00:30:13.781 { 00:30:13.781 "name": "nvme0n1", 00:30:13.781 "aliases": [ 00:30:13.781 "d9b06a0b-6837-4df2-9173-db6ced33150f" 00:30:13.781 ], 00:30:13.781 "product_name": "NVMe disk", 00:30:13.781 "block_size": 512, 00:30:13.781 "num_blocks": 2097152, 00:30:13.781 "uuid": 
"d9b06a0b-6837-4df2-9173-db6ced33150f", 00:30:13.781 "assigned_rate_limits": { 00:30:13.781 "rw_ios_per_sec": 0, 00:30:13.781 "rw_mbytes_per_sec": 0, 00:30:13.781 "r_mbytes_per_sec": 0, 00:30:13.781 "w_mbytes_per_sec": 0 00:30:13.781 }, 00:30:13.781 "claimed": false, 00:30:13.781 "zoned": false, 00:30:13.781 "supported_io_types": { 00:30:13.781 "read": true, 00:30:13.781 "write": true, 00:30:13.781 "unmap": false, 00:30:13.781 "flush": true, 00:30:13.781 "reset": true, 00:30:13.781 "nvme_admin": true, 00:30:13.781 "nvme_io": true, 00:30:13.781 "nvme_io_md": false, 00:30:13.781 "write_zeroes": true, 00:30:13.781 "zcopy": false, 00:30:13.781 "get_zone_info": false, 00:30:13.781 "zone_management": false, 00:30:13.781 "zone_append": false, 00:30:13.781 "compare": true, 00:30:13.781 "compare_and_write": true, 00:30:13.781 "abort": true, 00:30:13.781 "seek_hole": false, 00:30:13.781 "seek_data": false, 00:30:13.781 "copy": true, 00:30:13.781 "nvme_iov_md": false 00:30:13.781 }, 00:30:13.781 "memory_domains": [ 00:30:13.781 { 00:30:13.781 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:30:13.781 "dma_device_type": 0 00:30:13.781 } 00:30:13.781 ], 00:30:13.781 "driver_specific": { 00:30:13.781 "nvme": [ 00:30:13.781 { 00:30:13.781 "trid": { 00:30:13.781 "trtype": "RDMA", 00:30:13.781 "adrfam": "IPv4", 00:30:13.781 "traddr": "192.168.100.8", 00:30:13.781 "trsvcid": "4421", 00:30:13.781 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:13.781 }, 00:30:13.781 "ctrlr_data": { 00:30:13.781 "cntlid": 3, 00:30:13.781 "vendor_id": "0x8086", 00:30:13.781 "model_number": "SPDK bdev Controller", 00:30:13.781 "serial_number": "00000000000000000000", 00:30:13.781 "firmware_revision": "24.09", 00:30:13.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.781 "oacs": { 00:30:13.781 "security": 0, 00:30:13.781 "format": 0, 00:30:13.781 "firmware": 0, 00:30:13.781 "ns_manage": 0 00:30:13.781 }, 00:30:13.781 "multi_ctrlr": true, 00:30:13.781 "ana_reporting": false 00:30:13.781 }, 00:30:13.781 "vs": { 00:30:13.781 "nvme_version": "1.3" 00:30:13.781 }, 00:30:13.781 "ns_data": { 00:30:13.781 "id": 1, 00:30:13.781 "can_share": true 00:30:13.781 } 00:30:13.781 } 00:30:13.781 ], 00:30:13.781 "mp_policy": "active_passive" 00:30:13.781 } 00:30:13.781 } 00:30:13.781 ] 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Hmbwx6ZrtP 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 
00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:13.781 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:13.781 rmmod nvme_rdma 00:30:13.781 rmmod nvme_fabrics 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1257973 ']' 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1257973 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1257973 ']' 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1257973 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1257973 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1257973' 00:30:14.041 killing process with pid 1257973 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1257973 00:30:14.041 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1257973 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:14.301 00:30:14.301 real 0m10.179s 00:30:14.301 user 0m4.038s 00:30:14.301 sys 0m6.870s 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:14.301 ************************************ 00:30:14.301 END TEST nvmf_async_init 00:30:14.301 ************************************ 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.301 ************************************ 00:30:14.301 START TEST dma 00:30:14.301 ************************************ 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:30:14.301 * Looking for test storage... 
00:30:14.301 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:30:14.301 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:14.302 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.302 20:51:02 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:30:14.302 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:14.302 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:14.302 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.302 20:51:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.302 20:51:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.302 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:14.302 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:14.302 20:51:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:30:14.302 20:51:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@291 -- # pci_devs=() 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@295 -- # net_devs=() 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@296 -- # e810=() 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@296 -- # local -ga e810 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@297 -- # x722=() 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@297 -- # local -ga x722 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@298 -- # mlx=() 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
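gather_supported_nvmf_pci_devs above builds whitelists of NIC vendor:device IDs (Intel e810/x722 plus a range of Mellanox parts) before walking the PCI bus; the two hits that follow are 0x15b3:0x1015 devices. A hypothetical standalone scan for the same pair, for illustration only (the harness relies on its own pci_bus_cache instead):

  for dev in /sys/bus/pci/devices/*; do
      if [[ $(cat "$dev/vendor") == 0x15b3 && $(cat "$dev/device") == 0x1015 ]]; then
          echo "mlx5 NIC at ${dev##*/}: $(ls "$dev/net" 2>/dev/null)"
      fi
  done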
00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:22.421 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:22.421 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:22.421 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@383 -- 
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:22.421 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # uname 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:22.421 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:22.422 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:22.422 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:22.422 altname enp217s0f0np0 00:30:22.422 altname ens818f0np0 00:30:22.422 inet 192.168.100.8/24 scope global mlx_0_0 00:30:22.422 valid_lft forever preferred_lft forever 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:22.422 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:22.422 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:22.422 altname enp217s0f1np1 00:30:22.422 altname ens818f1np1 00:30:22.422 inet 192.168.100.9/24 scope global mlx_0_1 00:30:22.422 valid_lft forever preferred_lft forever 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # return 0 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:22.422 20:51:10 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:22.422 192.168.100.9' 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:22.422 192.168.100.9' 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # head -n 1 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:22.422 192.168.100.9' 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # tail -n +2 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # head -n 1 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # nvmfpid=1262631 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # waitforlisten 1262631 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@831 -- # '[' -z 1262631 ']' 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:22.422 20:51:10 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:22.422 [2024-07-26 20:51:10.871519] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:30:22.422 [2024-07-26 20:51:10.871575] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.422 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.422 [2024-07-26 20:51:10.956476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:22.682 [2024-07-26 20:51:10.996615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.682 [2024-07-26 20:51:10.996658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.682 [2024-07-26 20:51:10.996668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.682 [2024-07-26 20:51:10.996678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.682 [2024-07-26 20:51:10.996685] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
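For the dma test the target comes back with -m 0x3, pinning its two reactors to cores 0-1 (the notices that follow) and leaving cores 2-3 free for test_dma's 0xc mask. The launch as echoed in the trace, with the workspace prefix shortened:

  sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # -i 0 selects shared-memory id 0; -e 0xFFFF is the tracepoint group mask
  # reported by app_setup_trace above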
00:30:22.682 [2024-07-26 20:51:10.996915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.682 [2024-07-26 20:51:10.996920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.251 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:23.251 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # return 0 00:30:23.251 20:51:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:23.251 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:23.251 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:23.251 20:51:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.251 20:51:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:30:23.251 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.251 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:23.251 [2024-07-26 20:51:11.720384] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa936e0/0xa97bd0) succeed. 00:30:23.251 [2024-07-26 20:51:11.729655] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa94be0/0xad9260) succeed. 00:30:23.251 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.251 20:51:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:30:23.251 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.251 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:23.510 Malloc0 00:30:23.510 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.510 20:51:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:30:23.510 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.510 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:23.510 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.510 20:51:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:30:23.510 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.510 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:23.510 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.510 20:51:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:30:23.511 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.511 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:23.511 [2024-07-26 20:51:11.884483] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:23.511 20:51:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.511 20:51:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 
-o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:30:23.511 20:51:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:30:23.511 20:51:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@532 -- # config=() 00:30:23.511 20:51:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@532 -- # local subsystem config 00:30:23.511 20:51:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:23.511 20:51:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:23.511 { 00:30:23.511 "params": { 00:30:23.511 "name": "Nvme$subsystem", 00:30:23.511 "trtype": "$TEST_TRANSPORT", 00:30:23.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.511 "adrfam": "ipv4", 00:30:23.511 "trsvcid": "$NVMF_PORT", 00:30:23.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.511 "hdgst": ${hdgst:-false}, 00:30:23.511 "ddgst": ${ddgst:-false} 00:30:23.511 }, 00:30:23.511 "method": "bdev_nvme_attach_controller" 00:30:23.511 } 00:30:23.511 EOF 00:30:23.511 )") 00:30:23.511 20:51:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@554 -- # cat 00:30:23.511 20:51:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@556 -- # jq . 00:30:23.511 20:51:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@557 -- # IFS=, 00:30:23.511 20:51:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:23.511 "params": { 00:30:23.511 "name": "Nvme0", 00:30:23.511 "trtype": "rdma", 00:30:23.511 "traddr": "192.168.100.8", 00:30:23.511 "adrfam": "ipv4", 00:30:23.511 "trsvcid": "4420", 00:30:23.511 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:23.511 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:23.511 "hdgst": false, 00:30:23.511 "ddgst": false 00:30:23.511 }, 00:30:23.511 "method": "bdev_nvme_attach_controller" 00:30:23.511 }' 00:30:23.511 [2024-07-26 20:51:11.936051] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
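The test_dma invocation assembled above drives the bdev through SPDK's memory-domain path: -q 16 queue depth, -o 4096 byte I/Os, -w randrw with what appears to be a 70 percent read mix (-M 70), -t 5 seconds, -m 0xc for cores 2-3, and -x translate to request RDMA address translation; gen_nvmf_target_json's output is streamed in over --json /dev/fd/62 instead of a temp file. Reassembled with the workspace prefix shortened:

  sudo ./test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 \
      -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate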
00:30:23.511 [2024-07-26 20:51:11.936101] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262750 ] 00:30:23.511 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.511 [2024-07-26 20:51:12.019667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:23.511 [2024-07-26 20:51:12.058891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:23.511 [2024-07-26 20:51:12.058894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:30.116 bdev Nvme0n1 reports 1 memory domains 00:30:30.116 bdev Nvme0n1 supports RDMA memory domain 00:30:30.116 Initialization complete, running randrw IO for 5 sec on 2 cores 00:30:30.116 ========================================================================== 00:30:30.116 Latency [us] 00:30:30.116 IOPS MiB/s Average min max 00:30:30.116 Core 2: 21892.70 85.52 730.08 251.04 8342.02 00:30:30.116 Core 3: 22035.67 86.08 725.34 238.25 8430.53 00:30:30.116 ========================================================================== 00:30:30.116 Total : 43928.37 171.60 727.70 238.25 8430.53 00:30:30.116 00:30:30.116 Total operations: 219695, translate 219695 pull_push 0 memzero 0 00:30:30.116 20:51:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:30:30.116 20:51:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:30:30.116 20:51:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:30:30.116 [2024-07-26 20:51:17.485168] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
00:30:30.116 [2024-07-26 20:51:17.485229] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263725 ] 00:30:30.116 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.116 [2024-07-26 20:51:17.567031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:30.116 [2024-07-26 20:51:17.603582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:30.116 [2024-07-26 20:51:17.603585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.389 bdev Malloc0 reports 2 memory domains 00:30:35.389 bdev Malloc0 doesn't support RDMA memory domain 00:30:35.389 Initialization complete, running randrw IO for 5 sec on 2 cores 00:30:35.389 ========================================================================== 00:30:35.389 Latency [us] 00:30:35.389 IOPS MiB/s Average min max 00:30:35.390 Core 2: 14480.00 56.56 1104.20 379.34 1887.33 00:30:35.390 Core 3: 14598.76 57.03 1095.20 404.36 2407.34 00:30:35.390 ========================================================================== 00:30:35.390 Total : 29078.76 113.59 1099.68 379.34 2407.34 00:30:35.390 00:30:35.390 Total operations: 145450, translate 0 pull_push 581800 memzero 0 00:30:35.390 20:51:22 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:30:35.390 20:51:22 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:30:35.390 20:51:22 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:30:35.390 20:51:22 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:30:35.390 Ignoring -M option 00:30:35.390 [2024-07-26 20:51:22.945126] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
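A quick cross-check of the two summaries above: total operations divided by the 5-second window should land near the reported aggregate IOPS, and it does for both runs. Malloc0 advertises no RDMA memory domain, so the second run falls back to CPU pull_push copies; in this run the pull_push counter is exactly 4x the I/O count (581800 = 4 x 145450).

  echo $((219695 / 5))   # translate run: 43939, vs reported 43928.37 total IOPS
  echo $((145450 / 5))   # pull_push run: 29090, vs reported 29078.76 total IOPS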
00:30:35.390 [2024-07-26 20:51:22.945184] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264531 ] 00:30:35.390 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.390 [2024-07-26 20:51:23.027901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:35.390 [2024-07-26 20:51:23.067037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.390 [2024-07-26 20:51:23.067040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:40.657 bdev c6baf54c-6200-4cc4-ba91-94fc81ad7ccf reports 1 memory domains 00:30:40.657 bdev c6baf54c-6200-4cc4-ba91-94fc81ad7ccf supports RDMA memory domain 00:30:40.657 Initialization complete, running randread IO for 5 sec on 2 cores 00:30:40.657 ========================================================================== 00:30:40.657 Latency [us] 00:30:40.657 IOPS MiB/s Average min max 00:30:40.657 Core 2: 73497.34 287.10 216.91 79.98 1567.17 00:30:40.657 Core 3: 77538.48 302.88 205.60 82.04 1534.26 00:30:40.657 ========================================================================== 00:30:40.657 Total : 151035.82 589.98 211.11 79.98 1567.17 00:30:40.657 00:30:40.657 Total operations: 755264, translate 0 pull_push 0 memzero 755264 00:30:40.657 20:51:28 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:30:40.657 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.657 [2024-07-26 20:51:28.608980] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:42.561 Initializing NVMe Controllers 00:30:42.561 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:30:42.561 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:42.561 Initialization complete. Launching workers. 00:30:42.561 ======================================================== 00:30:42.561 Latency(us) 00:30:42.561 Device Information : IOPS MiB/s Average min max 00:30:42.561 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2024.66 7.91 7964.43 3990.28 10973.85 00:30:42.561 ======================================================== 00:30:42.561 Total : 2024.66 7.91 7964.43 3990.28 10973.85 00:30:42.561 00:30:42.561 20:51:30 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:30:42.561 20:51:30 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:30:42.561 20:51:30 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:30:42.561 20:51:30 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:30:42.561 [2024-07-26 20:51:30.949494] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
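The spdk_nvme_perf step above connects to the subsystem as a plain NVMe-oF initiator rather than through the bdev layer, which is why the target prints the deprecation warning about the discovery listener. The command as echoed in the trace, workspace prefix shortened:

  sudo ./build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
      -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'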
00:30:42.561 [2024-07-26 20:51:30.949560] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265862 ] 00:30:42.561 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.561 [2024-07-26 20:51:31.029828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:42.561 [2024-07-26 20:51:31.069307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:42.561 [2024-07-26 20:51:31.069309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.126 bdev a74b0e70-4302-4d7f-b6e4-4d7e5179fbbb reports 1 memory domains 00:30:49.126 bdev a74b0e70-4302-4d7f-b6e4-4d7e5179fbbb supports RDMA memory domain 00:30:49.126 Initialization complete, running randrw IO for 5 sec on 2 cores 00:30:49.126 ========================================================================== 00:30:49.126 Latency [us] 00:30:49.126 IOPS MiB/s Average min max 00:30:49.126 Core 2: 19356.16 75.61 825.95 33.51 12721.91 00:30:49.126 Core 3: 19606.54 76.59 815.39 18.49 12942.70 00:30:49.126 ========================================================================== 00:30:49.126 Total : 38962.70 152.20 820.64 18.49 12942.70 00:30:49.126 00:30:49.126 Total operations: 194830, translate 194721 pull_push 0 memzero 109 00:30:49.126 20:51:36 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:49.126 20:51:36 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:30:49.126 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:49.126 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # sync 00:30:49.126 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:49.126 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:49.126 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@120 -- # set +e 00:30:49.126 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:49.126 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:49.126 rmmod nvme_rdma 00:30:49.126 rmmod nvme_fabrics 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set -e 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # return 0 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@489 -- # '[' -n 1262631 ']' 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@490 -- # killprocess 1262631 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@950 -- # '[' -z 1262631 ']' 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # kill -0 1262631 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # uname 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1262631 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 1262631' 00:30:49.127 killing process with pid 1262631 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@969 -- # kill 1262631 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@974 -- # wait 1262631 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:49.127 00:30:49.127 real 0m34.242s 00:30:49.127 user 1m36.643s 00:30:49.127 sys 0m7.409s 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:49.127 ************************************ 00:30:49.127 END TEST dma 00:30:49.127 ************************************ 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.127 ************************************ 00:30:49.127 START TEST nvmf_identify 00:30:49.127 ************************************ 00:30:49.127 20:51:36 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:30:49.127 * Looking for test storage... 00:30:49.127 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.127 
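identify.sh starts by sourcing nvmf/common.sh, which mints a fresh host identity through nvme-cli; the NVME_HOSTID seen above is the UUID tail of the generated NQN. A standalone equivalent (the parameter expansion is an assumption about how the ID is derived, not code lifted from common.sh):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: keep only the trailing <uuid>
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")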
20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:30:49.127 20:51:37 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:57.243 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:57.243 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:30:57.243 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:57.243 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 
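The gather_supported_nvmf_pci_devs walk that follows builds allow-lists of NIC device IDs per vendor (Intel E810 0x1592/0x159b, X722 0x37d2, and the Mellanox ConnectX parts, including the 0x1015 functions found below) and maps each PCI function to its net interface through sysfs. A rough standalone equivalent of that scan, assuming only pciutils and sysfs (this loop is illustrative, not the common.sh code):

    # List Mellanox (vendor 0x15b3) PCI functions and the net devices
    # sysfs associates with them.
    for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] && echo "Found net device under $pci: $(basename "$net")"
        done
    done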
00:30:57.243 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:57.243 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:57.243 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:57.243 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:30:57.243 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:57.243 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:30:57.244 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 
-- # [[ mlx5_core == unknown ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:30:57.244 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:30:57.244 Found net devices under 0000:d9:00.0: mlx_0_0 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:30:57.244 Found net devices under 0000:d9:00.1: mlx_0_1 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
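rdma_device_init above first loads the kernel RDMA stack, then allocate_nic_ips hands out the 192.168.100.x addresses starting at NVMF_IP_LEAST_ADDR=8. The module set, exactly as probed in the trace:

    # Kernel modules loaded by rdma_device_init (from the modprobe calls above):
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done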
00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:57.244 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:57.244 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:30:57.244 altname enp217s0f0np0 00:30:57.244 altname ens818f0np0 00:30:57.244 inet 192.168.100.8/24 scope global mlx_0_0 00:30:57.244 valid_lft forever preferred_lft forever 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:57.244 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:57.244 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:57.245 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:30:57.245 altname enp217s0f1np1 00:30:57.245 altname ens818f1np1 00:30:57.245 inet 192.168.100.9/24 scope global mlx_0_1 00:30:57.245 valid_lft forever preferred_lft forever 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 
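get_ip_address, exercised twice above, reduces to a three-stage iproute2 pipeline: the inet field of ip -o -4 addr show is column 4, and cut strips the prefix length:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.9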
00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:57.245 192.168.100.9' 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:57.245 192.168.100.9' 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:57.245 192.168.100.9' 
00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1270815 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1270815 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1270815 ']' 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:57.245 20:51:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:57.245 [2024-07-26 20:51:45.507549] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:30:57.245 [2024-07-26 20:51:45.507601] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.245 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.245 [2024-07-26 20:51:45.592109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:57.245 [2024-07-26 20:51:45.633559] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.245 [2024-07-26 20:51:45.633596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
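With both addresses collected into RDMA_IP_LIST, head and tail pick the first and second target IPs, nvme-rdma is probed on the initiator side, and the target is launched. The launch from the trace, with the flags it reports (the backgrounding and pid capture are a sketch of what waitforlisten needs, not verbatim script code):

    # -i 0      shared-memory id; 'spdk_trace -s nvmf -i 0' attaches to it
    # -e 0xFFFF tracepoint group mask (all groups, per the NOTICE above)
    # -m 0xF    core mask; reactors start on cores 0-3
    "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # autotest helper: blocks until the RPC socket is up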
00:30:57.245 [2024-07-26 20:51:45.633606] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.245 [2024-07-26 20:51:45.633615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.245 [2024-07-26 20:51:45.633622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.245 [2024-07-26 20:51:45.633673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.245 [2024-07-26 20:51:45.633767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:57.245 [2024-07-26 20:51:45.633863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:57.245 [2024-07-26 20:51:45.633865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.812 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:57.812 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:30:57.812 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:57.812 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.812 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:57.812 [2024-07-26 20:51:46.354220] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x73eea0/0x743390) succeed. 00:30:57.812 [2024-07-26 20:51:46.363472] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7404e0/0x784a20) succeed. 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:58.070 Malloc0 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:58.070 [2024-07-26 20:51:46.573649] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:58.070 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.071 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:58.071 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.071 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:58.071 [ 00:30:58.071 { 00:30:58.071 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:58.071 "subtype": "Discovery", 00:30:58.071 "listen_addresses": [ 00:30:58.071 { 00:30:58.071 "trtype": "RDMA", 00:30:58.071 "adrfam": "IPv4", 00:30:58.071 "traddr": "192.168.100.8", 00:30:58.071 "trsvcid": "4420" 00:30:58.071 } 00:30:58.071 ], 00:30:58.071 "allow_any_host": true, 00:30:58.071 "hosts": [] 00:30:58.071 }, 00:30:58.071 { 00:30:58.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.071 "subtype": "NVMe", 00:30:58.071 "listen_addresses": [ 00:30:58.071 { 00:30:58.071 "trtype": "RDMA", 00:30:58.071 "adrfam": "IPv4", 00:30:58.071 "traddr": "192.168.100.8", 00:30:58.071 "trsvcid": "4420" 00:30:58.071 } 00:30:58.071 ], 00:30:58.071 "allow_any_host": true, 00:30:58.071 "hosts": [], 00:30:58.071 "serial_number": "SPDK00000000000001", 00:30:58.071 "model_number": "SPDK bdev Controller", 00:30:58.071 "max_namespaces": 32, 00:30:58.071 "min_cntlid": 1, 00:30:58.071 "max_cntlid": 65519, 00:30:58.071 "namespaces": [ 00:30:58.071 { 00:30:58.071 "nsid": 1, 00:30:58.071 "bdev_name": "Malloc0", 00:30:58.071 "name": "Malloc0", 00:30:58.071 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:58.071 "eui64": "ABCDEF0123456789", 00:30:58.071 "uuid": "04375ff7-354d-4451-bc38-1e04c4f7e784" 00:30:58.071 } 00:30:58.071 ] 00:30:58.071 } 00:30:58.071 ] 00:30:58.071 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.071 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:58.366 [2024-07-26 20:51:46.634365] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
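The JSON dump above is what nvmf_get_subsystems returns once the test has created the RDMA transport, a Malloc-backed namespace, and listeners for both the data and discovery subsystems; spdk_nvme_identify has just been pointed at the discovery NQN. The same setup expressed through scripts/rpc.py, which rpc_cmd in the trace wraps:

    rpc="$rootdir"/scripts/rpc.py
    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420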
00:30:58.367 [2024-07-26 20:51:46.634404] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1271094 ] 00:30:58.367 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.367 [2024-07-26 20:51:46.684554] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:58.367 [2024-07-26 20:51:46.684635] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:30:58.367 [2024-07-26 20:51:46.684650] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:30:58.367 [2024-07-26 20:51:46.684656] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:30:58.367 [2024-07-26 20:51:46.684684] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:58.367 [2024-07-26 20:51:46.703137] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:30:58.367 [2024-07-26 20:51:46.717255] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:30:58.367 [2024-07-26 20:51:46.717267] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:30:58.367 [2024-07-26 20:51:46.717274] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717282] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717289] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717296] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717302] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717309] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717316] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717322] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717329] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717335] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717342] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717348] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717355] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717361] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717368] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717375] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717381] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717388] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717394] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717401] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717408] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717414] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717424] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717431] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717437] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717444] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717450] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717457] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717463] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717470] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717476] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717483] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:30:58.367 [2024-07-26 20:51:46.717489] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:30:58.367 [2024-07-26 20:51:46.717493] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:30:58.367 [2024-07-26 20:51:46.717508] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.717520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182300 00:30:58.367 [2024-07-26 20:51:46.722632] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.367 [2024-07-26 20:51:46.722641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:30:58.367 [2024-07-26 20:51:46.722649] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.722656] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:58.367 [2024-07-26 20:51:46.722664] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:58.367 [2024-07-26 20:51:46.722671] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:58.367 [2024-07-26 20:51:46.722686] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.722694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.367 [2024-07-26 20:51:46.722720] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.367 [2024-07-26 20:51:46.722727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:30:58.367 [2024-07-26 20:51:46.722734] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:30:58.367 [2024-07-26 20:51:46.722740] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.722747] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:58.367 [2024-07-26 20:51:46.722755] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.722763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.367 [2024-07-26 20:51:46.722780] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.367 [2024-07-26 20:51:46.722787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:30:58.367 [2024-07-26 20:51:46.722794] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:58.367 [2024-07-26 20:51:46.722801] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.722808] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:58.367 [2024-07-26 20:51:46.722816] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.722824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.367 [2024-07-26 20:51:46.722845] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.367 [2024-07-26 20:51:46.722851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:58.367 [2024-07-26 20:51:46.722858] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:58.367 [2024-07-26 20:51:46.722864] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.722873] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.722881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.367 [2024-07-26 20:51:46.722897] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.367 [2024-07-26 20:51:46.722903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:58.367 [2024-07-26 20:51:46.722909] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:58.367 [2024-07-26 20:51:46.722916] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:58.367 [2024-07-26 20:51:46.722922] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.722929] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:58.367 [2024-07-26 20:51:46.723036] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:58.367 [2024-07-26 20:51:46.723042] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:58.367 [2024-07-26 20:51:46.723054] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.367 [2024-07-26 20:51:46.723062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.367 [2024-07-26 20:51:46.723084] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.367 [2024-07-26 20:51:46.723090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:58.367 [2024-07-26 20:51:46.723097] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:58.368 [2024-07-26 20:51:46.723103] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723112] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.368 [2024-07-26 20:51:46.723140] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.368 [2024-07-26 20:51:46.723146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:58.368 [2024-07-26 20:51:46.723152] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:30:58.368 [2024-07-26 20:51:46.723159] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:30:58.368 [2024-07-26 20:51:46.723165] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723172] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:30:58.368 [2024-07-26 20:51:46.723181] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:30:58.368 [2024-07-26 20:51:46.723190] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182300 00:30:58.368 [2024-07-26 20:51:46.723237] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.368 [2024-07-26 20:51:46.723243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:58.368 [2024-07-26 20:51:46.723252] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:30:58.368 [2024-07-26 20:51:46.723258] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:30:58.368 [2024-07-26 20:51:46.723264] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:30:58.368 [2024-07-26 20:51:46.723273] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:30:58.368 [2024-07-26 20:51:46.723280] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:30:58.368 [2024-07-26 20:51:46.723286] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:30:58.368 [2024-07-26 20:51:46.723292] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723300] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:30:58.368 [2024-07-26 20:51:46.723308] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.368 [2024-07-26 20:51:46.723334] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.368 [2024-07-26 20:51:46.723340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:58.368 [2024-07-26 20:51:46.723349] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723357] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.368 [2024-07-26 20:51:46.723364] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.368 [2024-07-26 20:51:46.723380] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.368 [2024-07-26 20:51:46.723395] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.368 [2024-07-26 20:51:46.723409] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:30:58.368 [2024-07-26 20:51:46.723415] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723423] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:58.368 [2024-07-26 20:51:46.723431] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723439] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.368 [2024-07-26 20:51:46.723455] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.368 [2024-07-26 20:51:46.723461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:30:58.368 [2024-07-26 20:51:46.723468] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:30:58.368 [2024-07-26 20:51:46.723474] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:30:58.368 [2024-07-26 20:51:46.723481] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723490] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182300 00:30:58.368 [2024-07-26 20:51:46.723521] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.368 [2024-07-26 20:51:46.723527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:58.368 [2024-07-26 20:51:46.723535] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:30:58.368 [2024-07-26 20:51:46.723565] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x182300 00:30:58.368 [2024-07-26 20:51:46.723582] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.368 [2024-07-26 20:51:46.723603] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.368 [2024-07-26 20:51:46.723609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:58.368 [2024-07-26 20:51:46.723622] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x182300 00:30:58.368 [2024-07-26 20:51:46.723641] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723648] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.368 [2024-07-26 20:51:46.723653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:58.368 [2024-07-26 20:51:46.723660] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723667] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.368 [2024-07-26 20:51:46.723672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:58.368 [2024-07-26 20:51:46.723682] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x182300 00:30:58.368 [2024-07-26 20:51:46.723696] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182300 00:30:58.368 [2024-07-26 20:51:46.723715] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.368 [2024-07-26 20:51:46.723721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:58.368 [2024-07-26 20:51:46.723732] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182300 00:30:58.368 ===================================================== 00:30:58.368 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:58.368 
===================================================== 00:30:58.368 Controller Capabilities/Features 00:30:58.368 ================================ 00:30:58.368 Vendor ID: 0000 00:30:58.368 Subsystem Vendor ID: 0000 00:30:58.368 Serial Number: .................... 00:30:58.368 Model Number: ........................................ 00:30:58.368 Firmware Version: 24.09 00:30:58.368 Recommended Arb Burst: 0 00:30:58.368 IEEE OUI Identifier: 00 00 00 00:30:58.368 Multi-path I/O 00:30:58.368 May have multiple subsystem ports: No 00:30:58.368 May have multiple controllers: No 00:30:58.368 Associated with SR-IOV VF: No 00:30:58.368 Max Data Transfer Size: 131072 00:30:58.368 Max Number of Namespaces: 0 00:30:58.368 Max Number of I/O Queues: 1024 00:30:58.368 NVMe Specification Version (VS): 1.3 00:30:58.368 NVMe Specification Version (Identify): 1.3 00:30:58.368 Maximum Queue Entries: 128 00:30:58.368 Contiguous Queues Required: Yes 00:30:58.368 Arbitration Mechanisms Supported 00:30:58.368 Weighted Round Robin: Not Supported 00:30:58.368 Vendor Specific: Not Supported 00:30:58.368 Reset Timeout: 15000 ms 00:30:58.368 Doorbell Stride: 4 bytes 00:30:58.368 NVM Subsystem Reset: Not Supported 00:30:58.368 Command Sets Supported 00:30:58.369 NVM Command Set: Supported 00:30:58.369 Boot Partition: Not Supported 00:30:58.369 Memory Page Size Minimum: 4096 bytes 00:30:58.369 Memory Page Size Maximum: 4096 bytes 00:30:58.369 Persistent Memory Region: Not Supported 00:30:58.369 Optional Asynchronous Events Supported 00:30:58.369 Namespace Attribute Notices: Not Supported 00:30:58.369 Firmware Activation Notices: Not Supported 00:30:58.369 ANA Change Notices: Not Supported 00:30:58.369 PLE Aggregate Log Change Notices: Not Supported 00:30:58.369 LBA Status Info Alert Notices: Not Supported 00:30:58.369 EGE Aggregate Log Change Notices: Not Supported 00:30:58.369 Normal NVM Subsystem Shutdown event: Not Supported 00:30:58.369 Zone Descriptor Change Notices: Not Supported 00:30:58.369 Discovery Log Change Notices: Supported 00:30:58.369 Controller Attributes 00:30:58.369 128-bit Host Identifier: Not Supported 00:30:58.369 Non-Operational Permissive Mode: Not Supported 00:30:58.369 NVM Sets: Not Supported 00:30:58.369 Read Recovery Levels: Not Supported 00:30:58.369 Endurance Groups: Not Supported 00:30:58.369 Predictable Latency Mode: Not Supported 00:30:58.369 Traffic Based Keep ALive: Not Supported 00:30:58.369 Namespace Granularity: Not Supported 00:30:58.369 SQ Associations: Not Supported 00:30:58.369 UUID List: Not Supported 00:30:58.369 Multi-Domain Subsystem: Not Supported 00:30:58.369 Fixed Capacity Management: Not Supported 00:30:58.369 Variable Capacity Management: Not Supported 00:30:58.369 Delete Endurance Group: Not Supported 00:30:58.369 Delete NVM Set: Not Supported 00:30:58.369 Extended LBA Formats Supported: Not Supported 00:30:58.369 Flexible Data Placement Supported: Not Supported 00:30:58.369 00:30:58.369 Controller Memory Buffer Support 00:30:58.369 ================================ 00:30:58.369 Supported: No 00:30:58.369 00:30:58.369 Persistent Memory Region Support 00:30:58.369 ================================ 00:30:58.369 Supported: No 00:30:58.369 00:30:58.369 Admin Command Set Attributes 00:30:58.369 ============================ 00:30:58.369 Security Send/Receive: Not Supported 00:30:58.369 Format NVM: Not Supported 00:30:58.369 Firmware Activate/Download: Not Supported 00:30:58.369 Namespace Management: Not Supported 00:30:58.369 Device Self-Test: Not Supported 00:30:58.369 
Directives: Not Supported 00:30:58.369 NVMe-MI: Not Supported 00:30:58.369 Virtualization Management: Not Supported 00:30:58.369 Doorbell Buffer Config: Not Supported 00:30:58.369 Get LBA Status Capability: Not Supported 00:30:58.369 Command & Feature Lockdown Capability: Not Supported 00:30:58.369 Abort Command Limit: 1 00:30:58.369 Async Event Request Limit: 4 00:30:58.369 Number of Firmware Slots: N/A 00:30:58.369 Firmware Slot 1 Read-Only: N/A 00:30:58.369 Firmware Activation Without Reset: N/A 00:30:58.369 Multiple Update Detection Support: N/A 00:30:58.369 Firmware Update Granularity: No Information Provided 00:30:58.369 Per-Namespace SMART Log: No 00:30:58.369 Asymmetric Namespace Access Log Page: Not Supported 00:30:58.369 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:58.369 Command Effects Log Page: Not Supported 00:30:58.369 Get Log Page Extended Data: Supported 00:30:58.369 Telemetry Log Pages: Not Supported 00:30:58.369 Persistent Event Log Pages: Not Supported 00:30:58.369 Supported Log Pages Log Page: May Support 00:30:58.369 Commands Supported & Effects Log Page: Not Supported 00:30:58.369 Feature Identifiers & Effects Log Page: May Support 00:30:58.369 NVMe-MI Commands & Effects Log Page: May Support 00:30:58.369 Data Area 4 for Telemetry Log: Not Supported 00:30:58.369 Error Log Page Entries Supported: 128 00:30:58.369 Keep Alive: Not Supported 00:30:58.369 00:30:58.369 NVM Command Set Attributes 00:30:58.369 ========================== 00:30:58.369 Submission Queue Entry Size 00:30:58.369 Max: 1 00:30:58.369 Min: 1 00:30:58.369 Completion Queue Entry Size 00:30:58.369 Max: 1 00:30:58.369 Min: 1 00:30:58.369 Number of Namespaces: 0 00:30:58.369 Compare Command: Not Supported 00:30:58.369 Write Uncorrectable Command: Not Supported 00:30:58.369 Dataset Management Command: Not Supported 00:30:58.369 Write Zeroes Command: Not Supported 00:30:58.369 Set Features Save Field: Not Supported 00:30:58.369 Reservations: Not Supported 00:30:58.369 Timestamp: Not Supported 00:30:58.369 Copy: Not Supported 00:30:58.369 Volatile Write Cache: Not Present 00:30:58.369 Atomic Write Unit (Normal): 1 00:30:58.369 Atomic Write Unit (PFail): 1 00:30:58.369 Atomic Compare & Write Unit: 1 00:30:58.369 Fused Compare & Write: Supported 00:30:58.369 Scatter-Gather List 00:30:58.369 SGL Command Set: Supported 00:30:58.369 SGL Keyed: Supported 00:30:58.369 SGL Bit Bucket Descriptor: Not Supported 00:30:58.369 SGL Metadata Pointer: Not Supported 00:30:58.369 Oversized SGL: Not Supported 00:30:58.369 SGL Metadata Address: Not Supported 00:30:58.369 SGL Offset: Supported 00:30:58.369 Transport SGL Data Block: Not Supported 00:30:58.369 Replay Protected Memory Block: Not Supported 00:30:58.369 00:30:58.369 Firmware Slot Information 00:30:58.369 ========================= 00:30:58.369 Active slot: 0 00:30:58.369 00:30:58.369 00:30:58.369 Error Log 00:30:58.369 ========= 00:30:58.369 00:30:58.369 Active Namespaces 00:30:58.369 ================= 00:30:58.369 Discovery Log Page 00:30:58.369 ================== 00:30:58.369 Generation Counter: 2 00:30:58.369 Number of Records: 2 00:30:58.369 Record Format: 0 00:30:58.369 00:30:58.369 Discovery Log Entry 0 00:30:58.369 ---------------------- 00:30:58.369 Transport Type: 1 (RDMA) 00:30:58.369 Address Family: 1 (IPv4) 00:30:58.369 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:58.369 Entry Flags: 00:30:58.369 Duplicate Returned Information: 1 00:30:58.369 Explicit Persistent Connection Support for Discovery: 1 00:30:58.369 Transport Requirements: 
00:30:58.369 Secure Channel: Not Required 00:30:58.369 Port ID: 0 (0x0000) 00:30:58.369 Controller ID: 65535 (0xffff) 00:30:58.369 Admin Max SQ Size: 128 00:30:58.369 Transport Service Identifier: 4420 00:30:58.369 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:58.369 Transport Address: 192.168.100.8 00:30:58.369 Transport Specific Address Subtype - RDMA 00:30:58.369 RDMA QP Service Type: 1 (Reliable Connected) 00:30:58.369 RDMA Provider Type: 1 (No provider specified) 00:30:58.369 RDMA CM Service: 1 (RDMA_CM) 00:30:58.369 Discovery Log Entry 1 00:30:58.369 ---------------------- 00:30:58.369 Transport Type: 1 (RDMA) 00:30:58.369 Address Family: 1 (IPv4) 00:30:58.369 Subsystem Type: 2 (NVM Subsystem) 00:30:58.369 Entry Flags: 00:30:58.369 Duplicate Returned Information: 0 00:30:58.369 Explicit Persistent Connection Support for Discovery: 0 00:30:58.369 Transport Requirements: 00:30:58.369 Secure Channel: Not Required 00:30:58.369 Port ID: 0 (0x0000) 00:30:58.369 Controller ID: 65535 (0xffff) 00:30:58.369 Admin Max SQ Size: 128 00:30:58.373 Transport Service Identifier: 4420 00:30:58.373 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:58.373 Transport Address: 192.168.100.8 00:30:58.373 Transport Specific Address Subtype - RDMA 00:30:58.373 RDMA QP Service Type: 1 (Reliable Connected) 00:30:58.373 RDMA Provider Type: 1 (No provider specified) 00:30:58.373 RDMA CM Service: 1 (RDMA_CM) 00:30:58.373 [2024-07-26 20:51:46.723803] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:30:58.369 [2024-07-26 20:51:46.723813] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 3195 doesn't match qid 00:30:58.369 [2024-07-26 20:51:46.723828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:5 sqhd:ef40 p:0 m:0 dnr:0 00:30:58.369 [2024-07-26 20:51:46.723835] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 3195 doesn't match qid 00:30:58.369 [2024-07-26 20:51:46.723844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:5 sqhd:ef40 p:0 m:0 dnr:0 00:30:58.369 [2024-07-26 20:51:46.723850] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 3195 doesn't match qid 00:30:58.369 [2024-07-26 20:51:46.723859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:5 sqhd:ef40 p:0 m:0 dnr:0 00:30:58.369 [2024-07-26 20:51:46.723865] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 3195 doesn't match qid 00:30:58.369 [2024-07-26 20:51:46.723874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:5 sqhd:ef40 p:0 m:0 dnr:0 00:30:58.369 [2024-07-26 20:51:46.723883] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182300 00:30:58.369 [2024-07-26 20:51:46.723892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.369 [2024-07-26 20:51:46.723909] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.369 [2024-07-26 20:51:46.723915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:30:58.369 [2024-07-26 20:51:46.723926] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.369 [2024-07-26 20:51:46.723934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.723942] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.723958] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.723964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.723970] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:30:58.370 [2024-07-26 20:51:46.723976] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:30:58.370 [2024-07-26 20:51:46.723983] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.723991] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.723999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724018] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724032] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724042] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724070] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724084] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724094] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724126] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724139] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724148] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724176] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724190] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724200] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724227] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724244] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724253] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724279] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724293] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724302] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724330] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724343] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724352] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724379] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724392] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724401] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 
key:0x0 00:30:58.370 [2024-07-26 20:51:46.724425] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724439] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724448] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724474] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724487] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724496] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724528] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724542] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724551] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724577] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724589] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724598] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724628] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724641] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724650] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724678] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724691] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724700] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724725] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724738] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724747] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.370 [2024-07-26 20:51:46.724755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.370 [2024-07-26 20:51:46.724775] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.370 [2024-07-26 20:51:46.724781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:30:58.370 [2024-07-26 20:51:46.724788] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.724797] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.724805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.724824] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.724830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.724837] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.724846] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.724854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.724868] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.724874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.724881] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.724890] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.724898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.724920] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.724926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.724932] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.724942] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.724949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.724966] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.724972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.724978] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.724988] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.724996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.725016] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.725022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.725028] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725037] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.725066] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.725071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.725078] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725087] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.725115] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.725121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.725127] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725136] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.725160] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.725166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.725173] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725182] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.725208] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.725214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.725220] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725230] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.725252] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.725258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.725264] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725273] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.725301] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.725307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.725314] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725323] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.725345] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.725351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.725358] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725367] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.725392] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.725398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.725404] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725413] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.371 [2024-07-26 20:51:46.725441] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.371 [2024-07-26 20:51:46.725447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:30:58.371 [2024-07-26 20:51:46.725454] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182300 00:30:58.371 [2024-07-26 20:51:46.725463] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.725493] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.725499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.725505] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725514] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.725542] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.725548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.725555] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725564] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.725590] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.725596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.725602] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725612] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.725642] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.725648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.725655] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725664] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.725694] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.725699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.725706] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725715] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.725739] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.725745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.725752] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725761] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.725791] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.725797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.725803] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725812] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.725836] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.725843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.725849] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725858] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.725888] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.725894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.725901] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725910] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.725940] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.725946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.725953] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725962] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.725971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.725990] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.725996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.726003] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726012] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.726038] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.726044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.726051] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726060] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.726084] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.726090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.726097] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726105] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.726137] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.726143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.726150] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726159] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.726187] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.726193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.726200] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726209] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.726235] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.726241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.726248] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726258] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.726283] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.726289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:30:58.372 [2024-07-26 20:51:46.726295] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726304] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.372 [2024-07-26 20:51:46.726312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.372 [2024-07-26 20:51:46.726331] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.372 [2024-07-26 20:51:46.726337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:30:58.373 [2024-07-26 20:51:46.726343] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.726352] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.726360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.373 [2024-07-26 20:51:46.726378] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.373 [2024-07-26 20:51:46.726384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:30:58.373 [2024-07-26 20:51:46.726391] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.726400] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.726408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.373 [2024-07-26 20:51:46.726422] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.373 [2024-07-26 20:51:46.726428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:30:58.373 [2024-07-26 20:51:46.726435] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.726444] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.726452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:30:58.373 [2024-07-26 20:51:46.726472] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.373 [2024-07-26 20:51:46.726478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:30:58.373 [2024-07-26 20:51:46.726484] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.726494] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.726502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.373 [2024-07-26 20:51:46.726521] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.373 [2024-07-26 20:51:46.726527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:30:58.373 [2024-07-26 20:51:46.726534] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.726544] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.726552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.373 [2024-07-26 20:51:46.726572] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.373 [2024-07-26 20:51:46.726578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:30:58.373 [2024-07-26 20:51:46.726585] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.726594] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.726602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.373 [2024-07-26 20:51:46.726620] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.373 [2024-07-26 20:51:46.730632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:58.373 [2024-07-26 20:51:46.730640] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.730649] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.730658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.373 [2024-07-26 20:51:46.730682] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.373 [2024-07-26 20:51:46.730688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0008 p:0 m:0 dnr:0 00:30:58.373 [2024-07-26 20:51:46.730695] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.730702] 
nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:30:58.373 128 00:30:58.373 Transport Service Identifier: 4420 00:30:58.373 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:58.373 Transport Address: 192.168.100.8 00:30:58.373 Transport Specific Address Subtype - RDMA 00:30:58.373 RDMA QP Service Type: 1 (Reliable Connected) 00:30:58.373 RDMA Provider Type: 1 (No provider specified) 00:30:58.373 RDMA CM Service: 1 (RDMA_CM) 00:30:58.373 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:58.373 [2024-07-26 20:51:46.803247] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:30:58.373 [2024-07-26 20:51:46.803293] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1271107 ] 00:30:58.373 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.373 [2024-07-26 20:51:46.850423] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:30:58.373 [2024-07-26 20:51:46.850497] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:30:58.373 [2024-07-26 20:51:46.850510] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:30:58.373 [2024-07-26 20:51:46.850517] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:30:58.373 [2024-07-26 20:51:46.850542] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:30:58.373 [2024-07-26 20:51:46.861165] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
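The exchange above is SPDK's admin-queue bring-up for nqn.2016-06.io.spdk:cnode1: FABRIC CONNECT, VS/CAP property reads, CC.EN toggling, then the IDENTIFY chain. A minimal sketch of driving the same sequence through the public host API follows (assuming SPDK ~v24.x headers; the env name is illustrative, and the transport string is the one the test passes to spdk_nvme_identify with -r):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative app name */
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Same descriptor string the test passes to spdk_nvme_identify via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
	        "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "transport id parse failed\n");
		return 1;
	}

	/* Runs the admin state machine logged above: FABRIC CONNECT,
	 * VS/CAP property reads, CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect failed\n");
		return 1;
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}
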
00:30:58.373 [2024-07-26 20:51:46.871223] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:30:58.373 [2024-07-26 20:51:46.871233] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:30:58.373 [2024-07-26 20:51:46.871239] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871246] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871253] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871259] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871265] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871272] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871278] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871284] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871290] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871297] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871303] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871309] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871315] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871322] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871328] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871334] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871340] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871347] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871353] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871359] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871366] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871372] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871378] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 
20:51:46.871384] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871391] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871397] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871403] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871412] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871418] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871424] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871431] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182300 00:30:58.373 [2024-07-26 20:51:46.871436] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:30:58.373 [2024-07-26 20:51:46.871442] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:30:58.373 [2024-07-26 20:51:46.871446] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:30:58.373 [2024-07-26 20:51:46.871458] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.871470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182300 00:30:58.374 [2024-07-26 20:51:46.876629] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.374 [2024-07-26 20:51:46.876638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:30:58.374 [2024-07-26 20:51:46.876645] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.876652] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:58.374 [2024-07-26 20:51:46.876659] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:30:58.374 [2024-07-26 20:51:46.876665] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:30:58.374 [2024-07-26 20:51:46.876678] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.876687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.374 [2024-07-26 20:51:46.876708] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.374 [2024-07-26 20:51:46.876714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:30:58.374 [2024-07-26 20:51:46.876721] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:30:58.374 [2024-07-26 20:51:46.876727] nvme_rdma.c:2367:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.876734] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:30:58.374 [2024-07-26 20:51:46.876741] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.876749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.374 [2024-07-26 20:51:46.876765] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.374 [2024-07-26 20:51:46.876771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:30:58.374 [2024-07-26 20:51:46.876778] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:30:58.374 [2024-07-26 20:51:46.876784] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.876791] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:30:58.374 [2024-07-26 20:51:46.876799] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.876808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.374 [2024-07-26 20:51:46.876826] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.374 [2024-07-26 20:51:46.876832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:58.374 [2024-07-26 20:51:46.876839] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:58.374 [2024-07-26 20:51:46.876845] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.876853] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.876861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.374 [2024-07-26 20:51:46.876881] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.374 [2024-07-26 20:51:46.876886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:58.374 [2024-07-26 20:51:46.876893] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:30:58.374 [2024-07-26 20:51:46.876899] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:30:58.374 [2024-07-26 20:51:46.876905] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.876912] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:30:58.374 [2024-07-26 20:51:46.877018] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:30:58.374 [2024-07-26 20:51:46.877023] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:58.374 [2024-07-26 20:51:46.877034] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.877042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.374 [2024-07-26 20:51:46.877068] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.374 [2024-07-26 20:51:46.877074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:58.374 [2024-07-26 20:51:46.877080] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:58.374 [2024-07-26 20:51:46.877086] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.877095] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.877103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.374 [2024-07-26 20:51:46.877118] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.374 [2024-07-26 20:51:46.877124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:30:58.374 [2024-07-26 20:51:46.877130] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:58.374 [2024-07-26 20:51:46.877136] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:30:58.374 [2024-07-26 20:51:46.877144] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.877151] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:30:58.374 [2024-07-26 20:51:46.877162] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:30:58.374 [2024-07-26 20:51:46.877172] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.877180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182300 00:30:58.374 [2024-07-26 20:51:46.877216] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.374 [2024-07-26 20:51:46.877221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:58.374 [2024-07-26 20:51:46.877230] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:30:58.374 [2024-07-26 20:51:46.877236] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:30:58.374 [2024-07-26 20:51:46.877242] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:30:58.374 [2024-07-26 20:51:46.877249] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:30:58.374 [2024-07-26 20:51:46.877255] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:30:58.374 [2024-07-26 20:51:46.877261] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:30:58.374 [2024-07-26 20:51:46.877267] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.877275] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:30:58.374 [2024-07-26 20:51:46.877282] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.877290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.374 [2024-07-26 20:51:46.877316] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.374 [2024-07-26 20:51:46.877322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:58.374 [2024-07-26 20:51:46.877330] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.877337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.374 [2024-07-26 20:51:46.877344] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.877351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.374 [2024-07-26 20:51:46.877359] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.877366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.374 [2024-07-26 20:51:46.877373] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.877380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.374 [2024-07-26 20:51:46.877386] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:58.374 [2024-07-26 20:51:46.877394] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.877402] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:58.374 [2024-07-26 20:51:46.877410] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.374 [2024-07-26 20:51:46.877417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.374 [2024-07-26 20:51:46.877437] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.374 [2024-07-26 20:51:46.877443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:30:58.374 [2024-07-26 20:51:46.877449] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:30:58.374 [2024-07-26 20:51:46.877456] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:58.374 [2024-07-26 20:51:46.877462] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877469] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877476] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877483] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.375 [2024-07-26 20:51:46.877513] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.375 [2024-07-26 20:51:46.877519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:30:58.375 [2024-07-26 20:51:46.877571] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877577] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877584] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877593] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182300 00:30:58.375 [2024-07-26 20:51:46.877628] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.375 [2024-07-26 20:51:46.877634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:58.375 [2024-07-26 20:51:46.877644] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:30:58.375 
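At this point the initiator walks IDENTIFY CNS 02 (active namespace list), CNS 00 (per-namespace data), and CNS 03 (namespace ID descriptors), after which the log reports "Namespace 1 was added". A hedged sketch of enumerating the result through the public API, assuming a ctrlr already returned by spdk_nvme_connect():

#include <stdio.h>
#include <stdint.h>
#include "spdk/nvme.h"

static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL) {
			continue;
		}
		/* "Namespace 1 was added" in the log corresponds to nsid 1 here. */
		printf("nsid %u: %ju sectors of %u bytes\n", nsid,
		       (uintmax_t)spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}
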
[2024-07-26 20:51:46.877654] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877660] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877668] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877676] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182300 00:30:58.375 [2024-07-26 20:51:46.877717] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.375 [2024-07-26 20:51:46.877722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:58.375 [2024-07-26 20:51:46.877734] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877741] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877748] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877756] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182300 00:30:58.375 [2024-07-26 20:51:46.877793] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.375 [2024-07-26 20:51:46.877799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:58.375 [2024-07-26 20:51:46.877807] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877814] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877821] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877829] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877836] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877842] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877849] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host 
ID (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877855] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:30:58.375 [2024-07-26 20:51:46.877861] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:30:58.375 [2024-07-26 20:51:46.877867] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:30:58.375 [2024-07-26 20:51:46.877881] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877888] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.375 [2024-07-26 20:51:46.877896] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.375 [2024-07-26 20:51:46.877913] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.375 [2024-07-26 20:51:46.877919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:58.375 [2024-07-26 20:51:46.877927] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877934] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.375 [2024-07-26 20:51:46.877939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:58.375 [2024-07-26 20:51:46.877945] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877955] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.877962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.375 [2024-07-26 20:51:46.877982] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.375 [2024-07-26 20:51:46.877988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:58.375 [2024-07-26 20:51:46.877994] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.878003] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.878011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.375 [2024-07-26 20:51:46.878034] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.375 [2024-07-26 20:51:46.878039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:58.375 [2024-07-26 20:51:46.878046] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: 
local addr 0x2000003cf8e8 length 0x10 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.878055] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.878062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.375 [2024-07-26 20:51:46.878077] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.375 [2024-07-26 20:51:46.878082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:30:58.375 [2024-07-26 20:51:46.878089] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.878102] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.878110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x182300 00:30:58.375 [2024-07-26 20:51:46.878118] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.878125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x182300 00:30:58.375 [2024-07-26 20:51:46.878134] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.878141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x182300 00:30:58.375 [2024-07-26 20:51:46.878149] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.878157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x182300 00:30:58.375 [2024-07-26 20:51:46.878167] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.375 [2024-07-26 20:51:46.878172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:58.375 [2024-07-26 20:51:46.878184] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.878190] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.375 [2024-07-26 20:51:46.878196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:58.375 [2024-07-26 20:51:46.878206] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.878213] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.375 [2024-07-26 20:51:46.878218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
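The controller report printed below is rendered from the Identify Controller data fetched above; the same fields are reachable programmatically. A sketch, assuming a connected ctrlr and SPDK's spdk_nvme_ctrlr_data layout:

#include <stdio.h>
#include "spdk/nvme.h"

static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

	printf("Vendor ID:  %04x\n", cdata->vid);
	printf("Serial:     %.20s\n", cdata->sn);
	printf("Model:      %.40s\n", cdata->mn);
	printf("Namespaces: %u\n", cdata->nn);
	/* MDTS is a power of two in units of the minimum page size
	 * (2^(12 + CAP.MPSMIN)); mdts == 0 would mean "no limit". */
	if (cdata->mdts != 0) {
		printf("Max xfer:   %u bytes\n",
		       (1u << cdata->mdts) * (1u << (12 + cap.bits.mpsmin)));
	}
}

With MDTS = 5 and a 4096-byte minimum page, that computes to the 131072-byte "Max Data Transfer Size" shown in the report below.
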
00:30:58.375 [2024-07-26 20:51:46.878226] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182300 00:30:58.375 [2024-07-26 20:51:46.878232] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.375 [2024-07-26 20:51:46.878237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:58.375 [2024-07-26 20:51:46.878246] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182300 00:30:58.375 ===================================================== 00:30:58.375 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:58.376 ===================================================== 00:30:58.376 Controller Capabilities/Features 00:30:58.376 ================================ 00:30:58.376 Vendor ID: 8086 00:30:58.376 Subsystem Vendor ID: 8086 00:30:58.376 Serial Number: SPDK00000000000001 00:30:58.376 Model Number: SPDK bdev Controller 00:30:58.376 Firmware Version: 24.09 00:30:58.376 Recommended Arb Burst: 6 00:30:58.376 IEEE OUI Identifier: e4 d2 5c 00:30:58.376 Multi-path I/O 00:30:58.376 May have multiple subsystem ports: Yes 00:30:58.376 May have multiple controllers: Yes 00:30:58.376 Associated with SR-IOV VF: No 00:30:58.376 Max Data Transfer Size: 131072 00:30:58.376 Max Number of Namespaces: 32 00:30:58.376 Max Number of I/O Queues: 127 00:30:58.376 NVMe Specification Version (VS): 1.3 00:30:58.376 NVMe Specification Version (Identify): 1.3 00:30:58.376 Maximum Queue Entries: 128 00:30:58.376 Contiguous Queues Required: Yes 00:30:58.376 Arbitration Mechanisms Supported 00:30:58.376 Weighted Round Robin: Not Supported 00:30:58.376 Vendor Specific: Not Supported 00:30:58.376 Reset Timeout: 15000 ms 00:30:58.376 Doorbell Stride: 4 bytes 00:30:58.376 NVM Subsystem Reset: Not Supported 00:30:58.376 Command Sets Supported 00:30:58.376 NVM Command Set: Supported 00:30:58.376 Boot Partition: Not Supported 00:30:58.376 Memory Page Size Minimum: 4096 bytes 00:30:58.376 Memory Page Size Maximum: 4096 bytes 00:30:58.376 Persistent Memory Region: Not Supported 00:30:58.376 Optional Asynchronous Events Supported 00:30:58.376 Namespace Attribute Notices: Supported 00:30:58.376 Firmware Activation Notices: Not Supported 00:30:58.376 ANA Change Notices: Not Supported 00:30:58.376 PLE Aggregate Log Change Notices: Not Supported 00:30:58.376 LBA Status Info Alert Notices: Not Supported 00:30:58.376 EGE Aggregate Log Change Notices: Not Supported 00:30:58.376 Normal NVM Subsystem Shutdown event: Not Supported 00:30:58.376 Zone Descriptor Change Notices: Not Supported 00:30:58.376 Discovery Log Change Notices: Not Supported 00:30:58.376 Controller Attributes 00:30:58.376 128-bit Host Identifier: Supported 00:30:58.376 Non-Operational Permissive Mode: Not Supported 00:30:58.376 NVM Sets: Not Supported 00:30:58.376 Read Recovery Levels: Not Supported 00:30:58.376 Endurance Groups: Not Supported 00:30:58.376 Predictable Latency Mode: Not Supported 00:30:58.376 Traffic Based Keep ALive: Not Supported 00:30:58.376 Namespace Granularity: Not Supported 00:30:58.376 SQ Associations: Not Supported 00:30:58.376 UUID List: Not Supported 00:30:58.376 Multi-Domain Subsystem: Not Supported 00:30:58.376 Fixed Capacity Management: Not Supported 00:30:58.376 Variable Capacity Management: Not Supported 00:30:58.376 Delete Endurance Group: Not Supported 00:30:58.376 Delete NVM Set: Not Supported 00:30:58.376 Extended LBA 
Formats Supported: Not Supported 00:30:58.376 Flexible Data Placement Supported: Not Supported 00:30:58.376 00:30:58.376 Controller Memory Buffer Support 00:30:58.376 ================================ 00:30:58.376 Supported: No 00:30:58.376 00:30:58.376 Persistent Memory Region Support 00:30:58.376 ================================ 00:30:58.376 Supported: No 00:30:58.376 00:30:58.376 Admin Command Set Attributes 00:30:58.376 ============================ 00:30:58.376 Security Send/Receive: Not Supported 00:30:58.376 Format NVM: Not Supported 00:30:58.376 Firmware Activate/Download: Not Supported 00:30:58.376 Namespace Management: Not Supported 00:30:58.376 Device Self-Test: Not Supported 00:30:58.376 Directives: Not Supported 00:30:58.376 NVMe-MI: Not Supported 00:30:58.376 Virtualization Management: Not Supported 00:30:58.376 Doorbell Buffer Config: Not Supported 00:30:58.376 Get LBA Status Capability: Not Supported 00:30:58.376 Command & Feature Lockdown Capability: Not Supported 00:30:58.376 Abort Command Limit: 4 00:30:58.376 Async Event Request Limit: 4 00:30:58.376 Number of Firmware Slots: N/A 00:30:58.376 Firmware Slot 1 Read-Only: N/A 00:30:58.376 Firmware Activation Without Reset: N/A 00:30:58.376 Multiple Update Detection Support: N/A 00:30:58.376 Firmware Update Granularity: No Information Provided 00:30:58.376 Per-Namespace SMART Log: No 00:30:58.376 Asymmetric Namespace Access Log Page: Not Supported 00:30:58.376 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:58.376 Command Effects Log Page: Supported 00:30:58.376 Get Log Page Extended Data: Supported 00:30:58.376 Telemetry Log Pages: Not Supported 00:30:58.376 Persistent Event Log Pages: Not Supported 00:30:58.376 Supported Log Pages Log Page: May Support 00:30:58.376 Commands Supported & Effects Log Page: Not Supported 00:30:58.376 Feature Identifiers & Effects Log Page:May Support 00:30:58.376 NVMe-MI Commands & Effects Log Page: May Support 00:30:58.376 Data Area 4 for Telemetry Log: Not Supported 00:30:58.376 Error Log Page Entries Supported: 128 00:30:58.376 Keep Alive: Supported 00:30:58.376 Keep Alive Granularity: 10000 ms 00:30:58.376 00:30:58.376 NVM Command Set Attributes 00:30:58.376 ========================== 00:30:58.376 Submission Queue Entry Size 00:30:58.376 Max: 64 00:30:58.376 Min: 64 00:30:58.376 Completion Queue Entry Size 00:30:58.376 Max: 16 00:30:58.376 Min: 16 00:30:58.376 Number of Namespaces: 32 00:30:58.376 Compare Command: Supported 00:30:58.376 Write Uncorrectable Command: Not Supported 00:30:58.376 Dataset Management Command: Supported 00:30:58.376 Write Zeroes Command: Supported 00:30:58.376 Set Features Save Field: Not Supported 00:30:58.376 Reservations: Supported 00:30:58.376 Timestamp: Not Supported 00:30:58.376 Copy: Supported 00:30:58.376 Volatile Write Cache: Present 00:30:58.376 Atomic Write Unit (Normal): 1 00:30:58.376 Atomic Write Unit (PFail): 1 00:30:58.376 Atomic Compare & Write Unit: 1 00:30:58.376 Fused Compare & Write: Supported 00:30:58.376 Scatter-Gather List 00:30:58.376 SGL Command Set: Supported 00:30:58.376 SGL Keyed: Supported 00:30:58.376 SGL Bit Bucket Descriptor: Not Supported 00:30:58.376 SGL Metadata Pointer: Not Supported 00:30:58.376 Oversized SGL: Not Supported 00:30:58.376 SGL Metadata Address: Not Supported 00:30:58.376 SGL Offset: Supported 00:30:58.376 Transport SGL Data Block: Not Supported 00:30:58.376 Replay Protected Memory Block: Not Supported 00:30:58.376 00:30:58.376 Firmware Slot Information 00:30:58.376 ========================= 00:30:58.376 Active 
slot: 1 00:30:58.376 Slot 1 Firmware Revision: 24.09 00:30:58.376 00:30:58.376 00:30:58.376 Commands Supported and Effects 00:30:58.376 ============================== 00:30:58.376 Admin Commands 00:30:58.376 -------------- 00:30:58.376 Get Log Page (02h): Supported 00:30:58.376 Identify (06h): Supported 00:30:58.376 Abort (08h): Supported 00:30:58.376 Set Features (09h): Supported 00:30:58.376 Get Features (0Ah): Supported 00:30:58.376 Asynchronous Event Request (0Ch): Supported 00:30:58.376 Keep Alive (18h): Supported 00:30:58.376 I/O Commands 00:30:58.376 ------------ 00:30:58.376 Flush (00h): Supported LBA-Change 00:30:58.376 Write (01h): Supported LBA-Change 00:30:58.376 Read (02h): Supported 00:30:58.376 Compare (05h): Supported 00:30:58.376 Write Zeroes (08h): Supported LBA-Change 00:30:58.376 Dataset Management (09h): Supported LBA-Change 00:30:58.376 Copy (19h): Supported LBA-Change 00:30:58.376 00:30:58.376 Error Log 00:30:58.377 ========= 00:30:58.377 00:30:58.377 Arbitration 00:30:58.377 =========== 00:30:58.377 Arbitration Burst: 1 00:30:58.377 00:30:58.377 Power Management 00:30:58.377 ================ 00:30:58.377 Number of Power States: 1 00:30:58.377 Current Power State: Power State #0 00:30:58.377 Power State #0: 00:30:58.377 Max Power: 0.00 W 00:30:58.377 Non-Operational State: Operational 00:30:58.377 Entry Latency: Not Reported 00:30:58.377 Exit Latency: Not Reported 00:30:58.377 Relative Read Throughput: 0 00:30:58.377 Relative Read Latency: 0 00:30:58.377 Relative Write Throughput: 0 00:30:58.377 Relative Write Latency: 0 00:30:58.377 Idle Power: Not Reported 00:30:58.377 Active Power: Not Reported 00:30:58.377 Non-Operational Permissive Mode: Not Supported 00:30:58.377 00:30:58.377 Health Information 00:30:58.377 ================== 00:30:58.377 Critical Warnings: 00:30:58.377 Available Spare Space: OK 00:30:58.377 Temperature: OK 00:30:58.377 Device Reliability: OK 00:30:58.377 Read Only: No 00:30:58.377 Volatile Memory Backup: OK 00:30:58.377 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:58.377 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:58.377 Available Spare: 0% 00:30:58.377 Available Spare Threshold: 0% 00:30:58.377 Life Percentage [2024-07-26 20:51:46.878323] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.377 [2024-07-26 20:51:46.878352] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.377 [2024-07-26 20:51:46.878357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:58.377 [2024-07-26 20:51:46.878364] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878392] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:30:58.377 [2024-07-26 20:51:46.878401] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 262 doesn't match qid 00:30:58.377 [2024-07-26 20:51:46.878415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32595 cdw0:5 sqhd:2f40 p:0 m:0 dnr:0 00:30:58.377 [2024-07-26 20:51:46.878422] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 262 doesn't match qid 
00:30:58.377 [2024-07-26 20:51:46.878431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32595 cdw0:5 sqhd:2f40 p:0 m:0 dnr:0 00:30:58.377 [2024-07-26 20:51:46.878437] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 262 doesn't match qid 00:30:58.377 [2024-07-26 20:51:46.878445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32595 cdw0:5 sqhd:2f40 p:0 m:0 dnr:0 00:30:58.377 [2024-07-26 20:51:46.878452] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 262 doesn't match qid 00:30:58.377 [2024-07-26 20:51:46.878460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32595 cdw0:5 sqhd:2f40 p:0 m:0 dnr:0 00:30:58.377 [2024-07-26 20:51:46.878469] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.377 [2024-07-26 20:51:46.878497] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.377 [2024-07-26 20:51:46.878503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:30:58.377 [2024-07-26 20:51:46.878512] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.377 [2024-07-26 20:51:46.878527] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878545] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.377 [2024-07-26 20:51:46.878551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:58.377 [2024-07-26 20:51:46.878558] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:58.377 [2024-07-26 20:51:46.878564] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:30:58.377 [2024-07-26 20:51:46.878571] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878580] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.377 [2024-07-26 20:51:46.878608] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.377 [2024-07-26 20:51:46.878613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:30:58.377 [2024-07-26 20:51:46.878621] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878635] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878643] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.377 [2024-07-26 20:51:46.878665] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.377 [2024-07-26 20:51:46.878671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:30:58.377 [2024-07-26 20:51:46.878677] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878687] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.377 [2024-07-26 20:51:46.878711] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.377 [2024-07-26 20:51:46.878717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:30:58.377 [2024-07-26 20:51:46.878724] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878733] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.377 [2024-07-26 20:51:46.878762] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.377 [2024-07-26 20:51:46.878770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:30:58.377 [2024-07-26 20:51:46.878777] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878786] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.377 [2024-07-26 20:51:46.878811] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.377 [2024-07-26 20:51:46.878817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:30:58.377 [2024-07-26 20:51:46.878825] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878835] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.377 [2024-07-26 20:51:46.878862] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.377 [2024-07-26 20:51:46.878868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:30:58.377 [2024-07-26 20:51:46.878875] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local 
addr 0x2000003cf640 length 0x10 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878884] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182300 00:30:58.377 [2024-07-26 20:51:46.878893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:30:58.377 [2024-07-26 20:51:46.878913] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.377 [2024-07-26 20:51:46.878919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
[identical FABRIC PROPERTY GET / SUCCESS (00/00) cycles repeat here from 20:51:46.878926 through 20:51:46.880619 while the shutdown poller spins: sqhd increments 0002-001f and wraps back through 0000-0004, the 0x10-byte request buffers cycle 0x2000003cf668-0x2000003cfaf0 and back to 0x2000003cf640, lkey stays 0x182300 throughout; roughly 40 repetitions elided]
00:30:58.379 [2024-07-26 20:51:46.884665] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:30:58.379 [2024-07-26 20:51:46.884671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0005 p:0 m:0 dnr:0 00:30:58.379 [2024-07-26 20:51:46.884677] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182300 00:30:58.379 [2024-07-26 20:51:46.884684] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:30:58.638 Used: 0% 00:30:58.638 Data Units Read: 0
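
An aside before the health/identify dump continues below: the PROPERTY GET loop condensed above is SPDK's shutdown poller re-reading the controller status register (CSTS, offset 0x1c) over the fabric, and cdw0 carries the raw register value back, so cdw0:1 is RDY=1 and the final cdw0:9 adds SHST=10b, i.e. "shutdown complete". A hedged way to issue one such read by hand, assuming a reasonably recent nvme-cli and that the controller happens to be connected as /dev/nvme0:

    # sketch only: read CSTS (offset 0x1c) from a connected fabrics controller
    nvme get-property /dev/nvme0 --offset=0x1c --human-readable
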
00:30:58.638 Data Units Written: 0 00:30:58.638 Host Read Commands: 0 00:30:58.638 Host Write Commands: 0 00:30:58.638 Controller Busy Time: 0 minutes 00:30:58.638 Power Cycles: 0 00:30:58.638 Power On Hours: 0 hours 00:30:58.638 Unsafe Shutdowns: 0 00:30:58.638 Unrecoverable Media Errors: 0 00:30:58.638 Lifetime Error Log Entries: 0 00:30:58.638 Warning Temperature Time: 0 minutes 00:30:58.638 Critical Temperature Time: 0 minutes 00:30:58.638 00:30:58.638 Number of Queues 00:30:58.638 ================ 00:30:58.638 Number of I/O Submission Queues: 127 00:30:58.638 Number of I/O Completion Queues: 127 00:30:58.638 00:30:58.638 Active Namespaces 00:30:58.638 ================= 00:30:58.638 Namespace ID:1 00:30:58.638 Error Recovery Timeout: Unlimited 00:30:58.638 Command Set Identifier: NVM (00h) 00:30:58.638 Deallocate: Supported 00:30:58.638 Deallocated/Unwritten Error: Not Supported 00:30:58.638 Deallocated Read Value: Unknown 00:30:58.638 Deallocate in Write Zeroes: Not Supported 00:30:58.638 Deallocated Guard Field: 0xFFFF 00:30:58.638 Flush: Supported 00:30:58.638 Reservation: Supported 00:30:58.638 Namespace Sharing Capabilities: Multiple Controllers 00:30:58.638 Size (in LBAs): 131072 (0GiB) 00:30:58.638 Capacity (in LBAs): 131072 (0GiB) 00:30:58.638 Utilization (in LBAs): 131072 (0GiB) 00:30:58.638 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:58.638 EUI64: ABCDEF0123456789 00:30:58.638 UUID: 04375ff7-354d-4451-bc38-1e04c4f7e784 00:30:58.638 Thin Provisioning: Not Supported 00:30:58.638 Per-NS Atomic Units: Yes 00:30:58.638 Atomic Boundary Size (Normal): 0 00:30:58.638 Atomic Boundary Size (PFail): 0 00:30:58.638 Atomic Boundary Offset: 0 00:30:58.639 Maximum Single Source Range Length: 65535 00:30:58.639 Maximum Copy Length: 65535 00:30:58.639 Maximum Source Range Count: 1 00:30:58.639 NGUID/EUI64 Never Reused: No 00:30:58.639 Namespace Write Protected: No 00:30:58.639 Number of LBA Formats: 1 00:30:58.639 Current LBA Format: LBA Format #00 00:30:58.639 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:58.639 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:58.639 rmmod nvme_rdma 00:30:58.639 rmmod 
nvme_fabrics 00:30:58.639 20:51:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1270815 ']' 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1270815 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1270815 ']' 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1270815 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1270815 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1270815' 00:30:58.639 killing process with pid 1270815 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1270815 00:30:58.639 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1270815 00:30:58.897 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:58.897 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:58.897 00:30:58.897 real 0m10.325s 00:30:58.897 user 0m9.091s 00:30:58.897 sys 0m6.772s 00:30:58.897 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:58.897 20:51:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:58.897 ************************************ 00:30:58.897 END TEST nvmf_identify 00:30:58.897 ************************************ 00:30:58.897 20:51:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:30:58.897 20:51:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:58.897 20:51:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:58.897 20:51:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.897 ************************************ 00:30:58.897 START TEST nvmf_perf 00:30:58.897 ************************************ 00:30:58.897 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:30:59.156 * Looking for test storage... 
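
Between the identify test ending above and the perf test starting here, the harness tore its app down through killprocess (the kill -0 liveness probe, the ps comm-name check, kill, then wait, all traced above). A minimal bash sketch of that helper, simplified from what the trace shows (the real one also special-cases processes whose comm is sudo):

    # sketch; not the verbatim autotest_common.sh implementation
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if the pid is gone
        kill "$pid"                              # default SIGTERM
        wait "$pid" || true                      # reap; tolerate a nonzero exit status
    }
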
00:30:59.156 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:59.156 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.156 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:59.156 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain prefixes repeated several more times, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[toolchain prefixes repeated and system dirs as above, elided] 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[toolchain prefixes repeated and system dirs as above, elided] 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo [the exported PATH value above, elided] 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT
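
One detail from the common.sh sourcing above worth pulling out: the host identity. nvme-cli generates the host NQN, and the host ID is the UUID stripped off its tail. A sketch of those two steps (the exact parameter expansion is my assumption; the trace only shows the resulting values):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # -> nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # -> the bare uuid, 8013ee90-... in this run
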
SIGTERM EXIT 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:59.157 20:51:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:07.276 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:07.276 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:07.276 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # 
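
The PCI-ID matching above identified both ports as 0x15b3:0x1015, a Mellanox ConnectX-4 Lx part; because the ID is neither 0x1017 nor 0x1019 and the transport is rdma, the harness then pins nvme-cli to 15 I/O queues (NVME_CONNECT='nvme connect -i 15', at common.sh@362 above). To confirm the match by hand on such a node:

    # sketch: list the functions the loop just matched (vendor:device filter)
    lspci -nn -d 15b3:1015
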
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:07.277 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:07.277 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 
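
Two small steps sit behind the "Found net devices under ..." lines just above: common.sh loads the kernel RDMA module stack, then hops from each matched PCI function to its netdev name through sysfs. A condensed sketch of both, with the module list and device path taken from this run:

    # sketch: RDMA module stack, then PCI function -> netdev name
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done
    ls /sys/bus/pci/devices/0000:d9:00.0/net/    # -> mlx_0_0 here
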
-- # (( 2 == 0 )) 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:31:07.277 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:07.277 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:07.277 altname enp217s0f0np0 00:31:07.277 altname ens818f0np0 00:31:07.277 inet 192.168.100.8/24 scope global mlx_0_0 00:31:07.277 valid_lft forever preferred_lft forever 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:31:07.277 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:07.277 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 
00:31:07.277 altname enp217s0f1np1 00:31:07.277 altname ens818f1np1 00:31:07.277 inet 192.168.100.9/24 scope global mlx_0_1 00:31:07.277 valid_lft forever preferred_lft forever 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:07.277 20:51:55 
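
The interleaved ip/awk/cut fragments around here are one pipeline per port; untangled, the address lookup common.sh performs is:

    # sketch: the nvmf/common.sh@113 pipeline, run for the first port
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8
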
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:31:07.277 192.168.100.9' 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:31:07.277 192.168.100.9' 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:07.277 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:31:07.278 192.168.100.9' 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1275019 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1275019 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1275019 ']' 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:07.278 20:51:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:07.278 [2024-07-26 20:51:55.680110] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
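
nvmfappstart, traced above, boils down to launching nvmf_tgt with the arguments accumulated in NVMF_APP and waiting for its RPC socket; a sketch from the repo root (the until-loop is my crude stand-in for the harness's waitforlisten):

    # sketch: 4 reactors (-m 0xF), all trace groups (-e 0xFFFF), shm id 0
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
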
00:31:07.278 [2024-07-26 20:51:55.680171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.278 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.278 [2024-07-26 20:51:55.765824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:07.278 [2024-07-26 20:51:55.805951] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:07.278 [2024-07-26 20:51:55.805990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.278 [2024-07-26 20:51:55.806000] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.278 [2024-07-26 20:51:55.806009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.278 [2024-07-26 20:51:55.806017] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.278 [2024-07-26 20:51:55.806064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.278 [2024-07-26 20:51:55.806158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:07.278 [2024-07-26 20:51:55.806241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:07.278 [2024-07-26 20:51:55.806243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.216 20:51:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:08.216 20:51:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:31:08.216 20:51:56 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:08.216 20:51:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:08.216 20:51:56 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:08.216 20:51:56 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.216 20:51:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:08.216 20:51:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:11.506 20:51:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:11.506 20:51:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:11.506 20:51:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:31:11.506 20:51:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:11.506 20:51:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:11.506 20:51:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:31:11.506 20:51:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:11.506 20:51:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:31:11.506 20:51:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # 
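
perf.sh@30-@34 above assemble the bdev list: jq pulls the local NVMe's PCI address out of the config generated by gen_nvme.sh, and a 64 MiB Malloc bdev is created alongside it. The same two calls by hand (rpc.py shortened from the full workspace path; values verbatim from the trace):

    # sketch: discover the local NVMe traddr, then create the Malloc bdev
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr'    # -> 0000:d8:00.0
    scripts/rpc.py bdev_malloc_create 64 512                    # 64 MiB, 512-byte blocks
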
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:31:11.765 [2024-07-26 20:52:00.114052] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:31:11.765 [2024-07-26 20:52:00.135527] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd366a0/0xd448c0) succeed. 00:31:11.765 [2024-07-26 20:52:00.145254] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd37ce0/0xdc4900) succeed. 00:31:11.765 20:52:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:12.023 20:52:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:12.023 20:52:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:12.282 20:52:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:12.282 20:52:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:12.282 20:52:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:12.541 [2024-07-26 20:52:00.983145] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:12.541 20:52:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:31:12.800 20:52:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:31:12.800 20:52:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:31:12.800 20:52:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:12.800 20:52:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:31:14.192 Initializing NVMe Controllers 00:31:14.192 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:31:14.192 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:31:14.192 Initialization complete. Launching workers. 
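Note: the table that follows is the local-PCIe baseline. Before any fabric run, perf.sh@24 points spdk_nvme_perf directly at the NVMe device 0000:d8:00.0, so the NVMe/RDMA tables further down can be read against raw-device latency. A sketch of the two invocation forms used in this log (binary path shortened, flags trimmed to the common subset):

  # Local baseline: same workload, no fabric in the I/O path
  ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:PCIe traddr:0000:d8:00.0'
  # The same workload over NVMe/RDMA against the target created above
  ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'

Only the -r transport string changes; everything else about the workload stays comparable.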
00:31:14.192 ========================================================
00:31:14.192 Latency(us)
00:31:14.192 Device Information : IOPS MiB/s Average min max
00:31:14.192 PCIE (0000:d8:00.0) NSID 1 from core 0: 101901.23 398.05 313.65 29.32 8206.96
00:31:14.192 ========================================================
00:31:14.192 Total : 101901.23 398.05 313.65 29.32 8206.96
00:31:14.192
00:31:14.192 20:52:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:31:14.192 EAL: No free 2048 kB hugepages reported on node 1
00:31:17.482 Initializing NVMe Controllers
00:31:17.482 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:31:17.482 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:17.482 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:17.482 Initialization complete. Launching workers.
00:31:17.482 ========================================================
00:31:17.482 Latency(us)
00:31:17.482 Device Information : IOPS MiB/s Average min max
00:31:17.482 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6799.87 26.56 146.86 46.58 5057.49
00:31:17.482 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5306.17 20.73 187.51 65.89 5080.98
00:31:17.482 ========================================================
00:31:17.482 Total : 12106.05 47.29 164.68 46.58 5080.98
00:31:17.482
00:31:17.482 20:52:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:31:17.482 EAL: No free 2048 kB hugepages reported on node 1
00:31:20.774 Initializing NVMe Controllers
00:31:20.774 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:31:20.774 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:20.774 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:20.774 Initialization complete. Launching workers.
00:31:20.774 ========================================================
00:31:20.774 Latency(us)
00:31:20.774 Device Information : IOPS MiB/s Average min max
00:31:20.774 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18565.38 72.52 1724.06 486.64 9033.53
00:31:20.774 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4051.86 15.83 7957.01 4960.07 11025.38
00:31:20.774 ========================================================
00:31:20.774 Total : 22617.24 88.35 2840.69 486.64 11025.38
00:31:20.774
00:31:20.774 20:52:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]]
00:31:20.774 20:52:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:31:20.774 EAL: No free 2048 kB hugepages reported on node 1
00:31:25.008 Initializing NVMe Controllers
00:31:25.008 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:31:25.008 Controller IO queue size 128, less than required.
00:31:25.008 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:25.008 Controller IO queue size 128, less than required.
00:31:25.008 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:25.008 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:25.008 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:25.008 Initialization complete. Launching workers.
00:31:25.008 ========================================================
00:31:25.008 Latency(us)
00:31:25.008 Device Information : IOPS MiB/s Average min max
00:31:25.008 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4053.43 1013.36 31699.57 14395.88 72191.34
00:31:25.008 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4092.34 1023.08 31176.52 13958.77 51792.10
00:31:25.008 ========================================================
00:31:25.008 Total : 8145.77 2036.44 31436.79 13958.77 72191.34
00:31:25.008
00:31:25.266 20:52:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4
00:31:25.266 EAL: No free 2048 kB hugepages reported on node 1
00:31:25.525 No valid NVMe controllers or AIO or URING devices found
00:31:25.525 Initializing NVMe Controllers
00:31:25.525 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:31:25.525 Controller IO queue size 128, less than required.
00:31:25.525 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:25.525 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:31:25.525 Controller IO queue size 128, less than required.
00:31:25.525 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:25.525 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:31:25.525 WARNING: Some requested NVMe devices were skipped
00:31:25.525 20:52:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat
00:31:25.525 EAL: No free 2048 kB hugepages reported on node 1
00:31:30.798 Initializing NVMe Controllers
00:31:30.799 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:31:30.799 Controller IO queue size 128, less than required.
00:31:30.799 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:30.799 Controller IO queue size 128, less than required.
00:31:30.799 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:30.799 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:30.799 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:30.799 Initialization complete. Launching workers.
00:31:30.799
00:31:30.799 ====================
00:31:30.799 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:31:30.799 RDMA transport:
00:31:30.799 dev name: mlx5_0
00:31:30.799 polls: 412716
00:31:30.799 idle_polls: 408923
00:31:30.799 completions: 45790
00:31:30.799 queued_requests: 1
00:31:30.799 total_send_wrs: 22895
00:31:30.799 send_doorbell_updates: 3586
00:31:30.799 total_recv_wrs: 23022
00:31:30.799 recv_doorbell_updates: 3590
00:31:30.799 ---------------------------------
00:31:30.799
00:31:30.799 ====================
00:31:30.799 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:31:30.799 RDMA transport:
00:31:30.799 dev name: mlx5_0
00:31:30.799 polls: 411238
00:31:30.799 idle_polls: 410948
00:31:30.799 completions: 20698
00:31:30.799 queued_requests: 1
00:31:30.799 total_send_wrs: 10349
00:31:30.799 send_doorbell_updates: 261
00:31:30.799 total_recv_wrs: 10476
00:31:30.799 recv_doorbell_updates: 262
00:31:30.799 ---------------------------------
00:31:30.799 ========================================================
00:31:30.799 Latency(us)
00:31:30.799 Device Information : IOPS MiB/s Average min max
00:31:30.799 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5723.50 1430.87 22413.18 11034.78 53835.57
00:31:30.799 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2587.00 646.75 49387.37 25483.41 73188.62
00:31:30.799 ========================================================
00:31:30.799 Total : 8310.50 2077.62 30810.06 11034.78 73188.62
00:31:30.799
00:31:30.799 20:52:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:31:30.799 20:52:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:30.799 20:52:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:31:30.799 20:52:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']'
00:31:30.799 20:52:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:31:37.370 20:52:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- #
ls_guid=e3db2f54-8603-4f4c-b39c-d03ed1c792ba 00:31:37.370 20:52:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb e3db2f54-8603-4f4c-b39c-d03ed1c792ba 00:31:37.370 20:52:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=e3db2f54-8603-4f4c-b39c-d03ed1c792ba 00:31:37.370 20:52:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:37.370 20:52:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:37.370 20:52:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:37.370 20:52:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:37.370 20:52:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:37.370 { 00:31:37.370 "uuid": "e3db2f54-8603-4f4c-b39c-d03ed1c792ba", 00:31:37.370 "name": "lvs_0", 00:31:37.370 "base_bdev": "Nvme0n1", 00:31:37.370 "total_data_clusters": 476466, 00:31:37.370 "free_clusters": 476466, 00:31:37.370 "block_size": 512, 00:31:37.370 "cluster_size": 4194304 00:31:37.370 } 00:31:37.370 ]' 00:31:37.370 20:52:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e3db2f54-8603-4f4c-b39c-d03ed1c792ba") .free_clusters' 00:31:37.370 20:52:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=476466 00:31:37.370 20:52:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e3db2f54-8603-4f4c-b39c-d03ed1c792ba") .cluster_size' 00:31:37.370 20:52:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:37.370 20:52:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1905864 00:31:37.370 20:52:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1905864 00:31:37.370 1905864 00:31:37.370 20:52:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:31:37.370 20:52:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:37.370 20:52:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e3db2f54-8603-4f4c-b39c-d03ed1c792ba lbd_0 20480 00:31:37.370 20:52:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=7e9f7660-0688-41c6-94af-4b54c03c7b1e 00:31:37.370 20:52:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 7e9f7660-0688-41c6-94af-4b54c03c7b1e lvs_n_0 00:31:38.750 20:52:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=1647e964-8848-4346-8c34-16b0a63df092 00:31:38.750 20:52:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 1647e964-8848-4346-8c34-16b0a63df092 00:31:38.750 20:52:26 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=1647e964-8848-4346-8c34-16b0a63df092 00:31:38.750 20:52:26 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:38.750 20:52:26 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:38.750 20:52:26 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:38.750 20:52:26 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:38.750 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:38.750 { 00:31:38.750 "uuid": "e3db2f54-8603-4f4c-b39c-d03ed1c792ba", 00:31:38.750 "name": "lvs_0", 00:31:38.750 "base_bdev": "Nvme0n1", 00:31:38.750 "total_data_clusters": 476466, 00:31:38.750 "free_clusters": 471346, 00:31:38.750 "block_size": 512, 00:31:38.750 "cluster_size": 4194304 00:31:38.750 }, 00:31:38.750 { 00:31:38.750 "uuid": "1647e964-8848-4346-8c34-16b0a63df092", 00:31:38.750 "name": "lvs_n_0", 00:31:38.750 "base_bdev": "7e9f7660-0688-41c6-94af-4b54c03c7b1e", 00:31:38.750 "total_data_clusters": 5114, 00:31:38.750 "free_clusters": 5114, 00:31:38.750 "block_size": 512, 00:31:38.750 "cluster_size": 4194304 00:31:38.750 } 00:31:38.750 ]' 00:31:38.750 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="1647e964-8848-4346-8c34-16b0a63df092") .free_clusters' 00:31:38.750 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:31:38.750 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="1647e964-8848-4346-8c34-16b0a63df092") .cluster_size' 00:31:38.750 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:38.750 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:31:38.750 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:31:38.750 20456 00:31:38.750 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:38.750 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1647e964-8848-4346-8c34-16b0a63df092 lbd_nest_0 20456 00:31:39.009 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=492c5991-c47e-479a-95e0-9fb9e139e520 00:31:39.009 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:39.268 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:39.268 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 492c5991-c47e-479a-95e0-9fb9e139e520 00:31:39.528 20:52:27 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:39.528 20:52:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:39.528 20:52:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:39.528 20:52:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:39.528 20:52:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:39.528 20:52:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:39.528 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.770 
Initializing NVMe Controllers
00:31:51.770 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:31:51.770 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:51.770 Initialization complete. Launching workers.
00:31:51.770 ========================================================
00:31:51.770 Latency(us)
00:31:51.770 Device Information : IOPS MiB/s Average min max
00:31:51.770 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5861.20 2.86 170.13 68.42 8060.64
00:31:51.770 ========================================================
00:31:51.770 Total : 5861.20 2.86 170.13 68.42 8060.64
00:31:51.770
00:31:51.770 20:52:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:51.770 20:52:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:31:51.770 EAL: No free 2048 kB hugepages reported on node 1
00:32:04.008 Initializing NVMe Controllers
00:32:04.008 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:32:04.008 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:04.009 Initialization complete. Launching workers.
00:32:04.009 ========================================================
00:32:04.009 Latency(us)
00:32:04.009 Device Information : IOPS MiB/s Average min max
00:32:04.009 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2667.42 333.43 374.53 154.67 8108.58
00:32:04.009 ========================================================
00:32:04.009 Total : 2667.42 333.43 374.53 154.67 8108.58
00:32:04.009
00:32:04.009 20:52:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:32:04.009 20:52:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:32:04.009 20:52:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:32:04.009 EAL: No free 2048 kB hugepages reported on node 1
00:32:13.988 Initializing NVMe Controllers
00:32:13.988 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:32:13.988 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:13.988 Initialization complete. Launching workers.
00:32:13.988 ========================================================
00:32:13.988 Latency(us)
00:32:13.988 Device Information : IOPS MiB/s Average min max
00:32:13.988 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11444.60 5.59 2795.61 976.24 9072.67
00:32:13.988 ========================================================
00:32:13.988 Total : 11444.60 5.59 2795.61 976.24 9072.67
00:32:13.988
00:32:13.988 20:53:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:32:13.988 20:53:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:32:13.988 EAL: No free 2048 kB hugepages reported on node 1
00:32:26.196 Initializing NVMe Controllers
00:32:26.196 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:32:26.196 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:26.196 Initialization complete. Launching workers.
00:32:26.196 ========================================================
00:32:26.196 Latency(us)
00:32:26.196 Device Information : IOPS MiB/s Average min max
00:32:26.196 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3960.00 495.00 8085.34 5904.22 19979.88
00:32:26.196 ========================================================
00:32:26.196 Total : 3960.00 495.00 8085.34 5904.22 19979.88
00:32:26.196
00:32:26.197 20:53:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:32:26.197 20:53:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:32:26.197 20:53:13 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:32:26.197 EAL: No free 2048 kB hugepages reported on node 1
00:32:38.460 Initializing NVMe Controllers
00:32:38.460 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:32:38.460 Controller IO queue size 128, less than required.
00:32:38.460 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:38.460 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:38.460 Initialization complete. Launching workers.
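Note: the 512-byte and 128 KiB tables above, plus the queue-depth-128 tables that follow, come from a 3x2 sweep over the qd_depth and io_size arrays set at perf.sh@95-96. Reconstructed from the xtrace, the driving loop is effectively:

  # One 10 s randrw run (50% reads) per (queue depth, I/O size) pair
  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
          ./build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
              -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
      done
  done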
00:32:38.460 ========================================================
00:32:38.460 Latency(us)
00:32:38.460 Device Information : IOPS MiB/s Average min max
00:32:38.460 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19009.10 9.28 6736.24 1937.96 14767.12
00:32:38.460 ========================================================
00:32:38.460 Total : 19009.10 9.28 6736.24 1937.96 14767.12
00:32:38.460
00:32:38.460 20:53:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:32:38.460 20:53:24 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:32:38.460 EAL: No free 2048 kB hugepages reported on node 1
00:32:48.434 Initializing NVMe Controllers
00:32:48.434 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:32:48.434 Controller IO queue size 128, less than required.
00:32:48.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:48.434 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:48.434 Initialization complete. Launching workers.
00:32:48.435 ========================================================
00:32:48.435 Latency(us)
00:32:48.435 Device Information : IOPS MiB/s Average min max
00:32:48.435 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11133.83 1391.73 11495.91 3384.04 23760.42
00:32:48.435 ========================================================
00:32:48.435 Total : 11133.83 1391.73 11495.91 3384.04 23760.42
00:32:48.435
00:32:48.435 20:53:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:48.435 20:53:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 492c5991-c47e-479a-95e0-9fb9e139e520
00:32:48.435 20:53:36 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:32:48.693 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7e9f7660-0688-41c6-94af-4b54c03c7b1e
00:32:48.953 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
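Note: teardown unwinds the setup in reverse, and the rmmod lines opening the next entries are the verbose output of the modprobe -r above. The order matters because the nested lvol store sits on top of the base lvol; reconstructed from perf.sh@104-108 (rpc.py path shortened):

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1         # stop serving I/O first
  rpc.py bdev_lvol_delete 492c5991-c47e-479a-95e0-9fb9e139e520    # lbd_nest_0
  rpc.py bdev_lvol_delete_lvstore -l lvs_n_0                      # nested store
  rpc.py bdev_lvol_delete 7e9f7660-0688-41c6-94af-4b54c03c7b1e    # lbd_0
  rpc.py bdev_lvol_delete_lvstore -l lvs_0                        # base store on Nvme0n1
  modprobe -v -r nvme-rdma    # nvmfcleanup retries this up to 20 times (common.sh@121)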
00:32:49.212 rmmod nvme_rdma 00:32:49.212 rmmod nvme_fabrics 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1275019 ']' 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1275019 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1275019 ']' 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1275019 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1275019 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1275019' 00:32:49.212 killing process with pid 1275019 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1275019 00:32:49.212 20:53:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1275019 00:32:51.747 20:53:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:51.747 20:53:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:32:51.747 00:32:51.747 real 1m52.835s 00:32:51.747 user 7m1.432s 00:32:51.747 sys 0m8.393s 00:32:51.747 20:53:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:51.747 20:53:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:51.747 ************************************ 00:32:51.747 END TEST nvmf_perf 00:32:51.747 ************************************ 00:32:51.747 20:53:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:32:51.747 20:53:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:51.747 20:53:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:51.747 20:53:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.005 ************************************ 00:32:52.005 START TEST nvmf_fio_host 00:32:52.005 ************************************ 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:32:52.005 * Looking for test storage... 
00:32:52.005 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.005 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:32:52.006 20:53:40 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:52.006 20:53:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:33:00.123 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:00.124 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:00.124 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:00.124 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:00.124 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev 
rxe_net_dev rxe_net_devs 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:33:00.124 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:00.124 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:00.124 altname enp217s0f0np0 00:33:00.124 altname ens818f0np0 00:33:00.124 inet 192.168.100.8/24 scope global mlx_0_0 00:33:00.124 valid_lft forever preferred_lft forever 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:33:00.124 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:00.124 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:00.124 altname enp217s0f1np1 00:33:00.124 altname ens818f1np1 00:33:00.124 inet 192.168.100.9/24 scope global mlx_0_1 00:33:00.124 valid_lft forever preferred_lft forever 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:33:00.124 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:33:00.125 20:53:48 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:00.125 192.168.100.9' 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:00.125 192.168.100.9' 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:00.125 192.168.100.9' 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1296136 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1296136 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1296136 ']' 00:33:00.125 20:53:48 
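
Stripped of tracing, the address plumbing above is one pipeline per interface plus a head/tail split of the two-line result. A minimal sketch, with interface names and addresses taken from this run:

get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
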
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:00.125 20:53:48 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.125 [2024-07-26 20:53:48.621391] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:33:00.125 [2024-07-26 20:53:48.621446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.125 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.384 [2024-07-26 20:53:48.706900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:00.384 [2024-07-26 20:53:48.745901] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.384 [2024-07-26 20:53:48.745945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.384 [2024-07-26 20:53:48.745954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.384 [2024-07-26 20:53:48.745963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.384 [2024-07-26 20:53:48.745970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.384 [2024-07-26 20:53:48.746022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.385 [2024-07-26 20:53:48.746119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.385 [2024-07-26 20:53:48.746202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:00.385 [2024-07-26 20:53:48.746204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.953 20:53:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:00.953 20:53:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:33:00.953 20:53:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:01.213 [2024-07-26 20:53:49.610055] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c26ea0/0x1c2b390) succeed. 00:33:01.213 [2024-07-26 20:53:49.619403] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c284e0/0x1c6ca20) succeed. 
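
The bring-up just logged is three steps once the xtrace noise is removed. Flags are copied from the log; the until-loop below is only a stand-in for waitforlisten's bounded retry (max_retries=100 above), using rpc_get_methods as a cheap RPC to probe the socket:

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5   # keep polling until the target answers on its UNIX socket
done
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
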
00:33:01.213 20:53:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:01.213 20:53:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:01.213 20:53:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.472 20:53:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:01.472 Malloc1 00:33:01.472 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:01.731 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:01.990 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:02.250 [2024-07-26 20:53:50.547714] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:02.250 20:53:50 
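
The four RPCs just traced are the whole provisioning story for the first target: a RAM-backed bdev, a subsystem, the namespace mapping, and an RDMA listener. Copied from the log, with rpc.py abbreviating the full scripts/rpc.py path:

rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MiB bdev with 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
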
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:02.250 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:02.521 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:02.521 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:02.521 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:02.521 20:53:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:02.779 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:02.779 fio-3.35 00:33:02.779 Starting 1 thread 00:33:02.779 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.335 00:33:05.335 test: (groupid=0, jobs=1): err= 0: pid=1296640: Fri Jul 26 20:53:53 2024 00:33:05.335 read: IOPS=18.0k, BW=70.5MiB/s (73.9MB/s)(141MiB/2004msec) 00:33:05.335 slat (nsec): min=1338, max=37740, avg=1488.53, stdev=424.13 00:33:05.335 clat (usec): min=1889, max=6361, avg=3519.25, stdev=84.44 00:33:05.335 lat (usec): min=1910, max=6362, avg=3520.74, stdev=84.36 00:33:05.335 clat percentiles (usec): 00:33:05.335 | 1.00th=[ 3458], 5.00th=[ 3490], 10.00th=[ 3490], 20.00th=[ 3490], 00:33:05.335 | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3523], 00:33:05.335 | 70.00th=[ 3523], 80.00th=[ 3523], 90.00th=[ 3556], 95.00th=[ 3556], 00:33:05.335 | 99.00th=[ 3589], 99.50th=[ 3818], 99.90th=[ 4555], 99.95th=[ 5407], 00:33:05.335 | 99.99th=[ 6325] 00:33:05.335 bw ( KiB/s): min=70648, max=73008, per=100.00%, avg=72188.00, stdev=1054.84, samples=4 00:33:05.335 iops : min=17662, max=18252, avg=18047.00, stdev=263.71, samples=4 00:33:05.335 write: IOPS=18.1k, BW=70.6MiB/s (74.0MB/s)(141MiB/2004msec); 0 zone resets 00:33:05.335 slat (nsec): min=1374, max=17660, avg=1568.34, stdev=424.93 00:33:05.335 clat (usec): min=1921, max=6348, avg=3518.69, stdev=85.12 00:33:05.335 lat (usec): min=1931, max=6350, avg=3520.26, stdev=85.05 00:33:05.335 clat percentiles (usec): 00:33:05.335 | 1.00th=[ 3458], 5.00th=[ 3490], 10.00th=[ 3490], 20.00th=[ 3490], 00:33:05.335 | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3523], 00:33:05.335 | 70.00th=[ 3523], 80.00th=[ 3523], 90.00th=[ 3523], 95.00th=[ 3556], 00:33:05.335 | 99.00th=[ 3621], 99.50th=[ 3851], 99.90th=[ 4555], 99.95th=[ 5473], 00:33:05.335 | 99.99th=[ 6325] 00:33:05.335 bw ( KiB/s): min=70632, max=72984, per=100.00%, avg=72282.00, stdev=1116.76, samples=4 00:33:05.335 iops : min=17658, max=18246, avg=18070.50, stdev=279.19, samples=4 00:33:05.335 lat (msec) : 2=0.01%, 4=99.83%, 10=0.16% 00:33:05.335 cpu : usr=99.50%, sys=0.10%, 
ctx=16, majf=0, minf=4 00:33:05.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:05.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:05.335 issued rwts: total=36151,36211,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:05.335 00:33:05.335 Run status group 0 (all jobs): 00:33:05.335 READ: bw=70.5MiB/s (73.9MB/s), 70.5MiB/s-70.5MiB/s (73.9MB/s-73.9MB/s), io=141MiB (148MB), run=2004-2004msec 00:33:05.335 WRITE: bw=70.6MiB/s (74.0MB/s), 70.6MiB/s-70.6MiB/s (74.0MB/s-74.0MB/s), io=141MiB (148MB), run=2004-2004msec 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:05.335 20:53:53 
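
The ldd/grep/awk dance above, repeated before each fio run, is the fio_plugin helper making sure any sanitizer runtime linked into the SPDK ioengine gets preloaded ahead of it (here none is found, so LD_PRELOAD ends up holding just the plugin). A condensed sketch of that helper, under the assumption that it takes the first sanitizer hit and stops:

fio_plugin() {
    local plugin=$1; shift
    local sanitizers=(libasan libclang_rt.asan) asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break   # sanitizer runtime must come first in LD_PRELOAD
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
}
# Usage, as in the run above:
# fio_plugin ./build/fio/spdk_nvme app/fio/nvme/mock_sgl_config.fio \
#     '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'
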
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:05.335 20:53:53 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:05.335 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:05.335 fio-3.35 00:33:05.335 Starting 1 thread 00:33:05.335 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.854 00:33:07.854 test: (groupid=0, jobs=1): err= 0: pid=1297299: Fri Jul 26 20:53:56 2024 00:33:07.854 read: IOPS=14.5k, BW=227MiB/s (238MB/s)(445MiB/1960msec) 00:33:07.854 slat (nsec): min=2230, max=53546, avg=2536.96, stdev=950.93 00:33:07.854 clat (usec): min=464, max=7714, avg=1534.17, stdev=1203.11 00:33:07.854 lat (usec): min=467, max=7734, avg=1536.70, stdev=1203.40 00:33:07.854 clat percentiles (usec): 00:33:07.854 | 1.00th=[ 668], 5.00th=[ 758], 10.00th=[ 824], 20.00th=[ 898], 00:33:07.854 | 30.00th=[ 963], 40.00th=[ 1045], 50.00th=[ 1139], 60.00th=[ 1254], 00:33:07.854 | 70.00th=[ 1385], 80.00th=[ 1549], 90.00th=[ 3228], 95.00th=[ 4817], 00:33:07.854 | 99.00th=[ 6063], 99.50th=[ 6652], 99.90th=[ 7177], 99.95th=[ 7308], 00:33:07.854 | 99.99th=[ 7635] 00:33:07.854 bw ( KiB/s): min=112095, max=117248, per=49.45%, avg=114999.75, stdev=2267.90, samples=4 00:33:07.854 iops : min= 7005, max= 7328, avg=7187.25, stdev=142.14, samples=4 00:33:07.854 write: IOPS=8261, BW=129MiB/s (135MB/s)(233MiB/1808msec); 0 zone resets 00:33:07.854 slat (usec): min=26, max=130, avg=28.54, stdev= 4.93 00:33:07.854 clat (usec): min=4079, max=19228, avg=12516.33, stdev=1800.05 00:33:07.854 lat (usec): min=4106, max=19257, avg=12544.86, stdev=1799.86 00:33:07.854 clat percentiles (usec): 00:33:07.854 | 1.00th=[ 8356], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11076], 00:33:07.854 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:33:07.854 | 70.00th=[13304], 80.00th=[13960], 90.00th=[14877], 95.00th=[15664], 00:33:07.854 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18744], 99.95th=[18744], 00:33:07.854 | 99.99th=[19268] 00:33:07.854 bw ( KiB/s): min=113181, max=122080, per=89.71%, avg=118583.25, stdev=3940.65, samples=4 00:33:07.854 iops : min= 7073, max= 7630, avg=7411.25, stdev=246.66, samples=4 00:33:07.854 lat (usec) : 500=0.01%, 750=2.82%, 1000=20.21% 00:33:07.854 lat (msec) : 2=35.01%, 4=1.99%, 10=7.69%, 20=32.27% 00:33:07.854 cpu : usr=96.36%, sys=2.00%, ctx=184, majf=0, minf=3 00:33:07.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:33:07.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:07.854 issued rwts: total=28486,14937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:07.854 00:33:07.854 Run status group 0 (all jobs): 00:33:07.854 READ: bw=227MiB/s (238MB/s), 227MiB/s-227MiB/s (238MB/s-238MB/s), io=445MiB (467MB), run=1960-1960msec 00:33:07.854 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=233MiB (245MB), run=1808-1808msec 00:33:07.854 20:53:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:07.854 20:53:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:07.854 20:53:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:07.854 20:53:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:07.854 20:53:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:33:07.854 20:53:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:33:07.854 20:53:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:07.854 20:53:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:07.854 20:53:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:33:08.110 20:53:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:33:08.110 20:53:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:33:08.110 20:53:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:33:11.377 Nvme0n1 00:33:11.377 20:53:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:16.622 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=c29dc882-d7cd-41a7-a15e-e4a7ca213e0b 00:33:16.622 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb c29dc882-d7cd-41a7-a15e-e4a7ca213e0b 00:33:16.622 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=c29dc882-d7cd-41a7-a15e-e4a7ca213e0b 00:33:16.622 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:16.622 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:16.622 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:16.622 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:16.878 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:16.878 { 00:33:16.878 "uuid": "c29dc882-d7cd-41a7-a15e-e4a7ca213e0b", 00:33:16.878 "name": "lvs_0", 00:33:16.878 "base_bdev": "Nvme0n1", 00:33:16.878 "total_data_clusters": 1862, 00:33:16.878 "free_clusters": 1862, 00:33:16.878 "block_size": 512, 00:33:16.878 "cluster_size": 1073741824 00:33:16.878 } 00:33:16.878 ]' 00:33:16.878 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c29dc882-d7cd-41a7-a15e-e4a7ca213e0b") .free_clusters' 00:33:16.878 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1862 00:33:16.878 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c29dc882-d7cd-41a7-a15e-e4a7ca213e0b") .cluster_size' 00:33:16.878 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # 
cs=1073741824 00:33:16.878 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1906688 00:33:16.878 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1906688 00:33:16.878 1906688 00:33:16.878 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:33:17.438 a1e03957-29ab-4614-91dd-0a45be3d6b0f 00:33:17.438 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:17.694 20:54:05 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:17.694 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:33:17.950 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:17.950 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:17.950 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:17.950 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:17.950 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:17.950 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:17.950 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:17.950 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:17.950 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:17.950 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:17.950 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:17.951 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:17.951 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:17.951 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:17.951 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:17.951 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:17.951 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
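
get_lvs_free_mb, traced above, is nothing more than unit conversion on the lvstore JSON: free_clusters times cluster_size gives bytes, and dividing by 1048576 gives MiB, so 1862 clusters of 1 GiB come out as 1862 × 1024 = 1906688 MiB, exactly the size handed to bdev_lvol_create. A sketch, with <uuid> standing in for the store UUID and rpc.py for the full scripts path:

lvs_info=$(rpc.py bdev_lvol_get_lvstores)
fc=$(jq '.[] | select(.uuid=="<uuid>") .free_clusters' <<< "$lvs_info")   # 1862
cs=$(jq '.[] | select(.uuid=="<uuid>") .cluster_size'  <<< "$lvs_info")   # 1073741824
free_mb=$(( fc * cs / 1024 / 1024 ))                                      # 1906688
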
common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:17.951 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:17.951 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:17.951 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:17.951 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:17.951 20:54:06 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:18.207 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:18.207 fio-3.35 00:33:18.207 Starting 1 thread 00:33:18.463 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.981 00:33:20.981 test: (groupid=0, jobs=1): err= 0: pid=1300126: Fri Jul 26 20:54:09 2024 00:33:20.982 read: IOPS=10.1k, BW=39.5MiB/s (41.4MB/s)(79.1MiB/2005msec) 00:33:20.982 slat (nsec): min=1333, max=25299, avg=1443.54, stdev=380.84 00:33:20.982 clat (usec): min=173, max=332413, avg=6291.39, stdev=18466.54 00:33:20.982 lat (usec): min=175, max=332415, avg=6292.83, stdev=18466.56 00:33:20.982 clat percentiles (msec): 00:33:20.982 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:33:20.982 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:33:20.982 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:33:20.982 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:33:20.982 | 99.99th=[ 334] 00:33:20.982 bw ( KiB/s): min=15248, max=48912, per=99.98%, avg=40392.00, stdev=16763.32, samples=4 00:33:20.982 iops : min= 3812, max=12228, avg=10098.00, stdev=4190.83, samples=4 00:33:20.982 write: IOPS=10.1k, BW=39.5MiB/s (41.4MB/s)(79.2MiB/2005msec); 0 zone resets 00:33:20.982 slat (nsec): min=1381, max=17719, avg=1565.67, stdev=356.77 00:33:20.982 clat (usec): min=153, max=332715, avg=6257.22, stdev=17948.14 00:33:20.982 lat (usec): min=154, max=332719, avg=6258.78, stdev=17948.19 00:33:20.982 clat percentiles (msec): 00:33:20.982 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:33:20.982 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:33:20.982 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:33:20.982 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 334], 99.95th=[ 334], 00:33:20.982 | 99.99th=[ 334] 00:33:20.982 bw ( KiB/s): min=15968, max=48584, per=99.89%, avg=40406.00, stdev=16292.05, samples=4 00:33:20.982 iops : min= 3992, max=12146, avg=10101.50, stdev=4073.01, samples=4 00:33:20.982 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:33:20.982 lat (msec) : 2=0.03%, 4=0.30%, 10=99.30%, 500=0.32% 00:33:20.982 cpu : usr=99.60%, sys=0.05%, ctx=16, majf=0, minf=13 00:33:20.982 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:20.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:20.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:20.982 issued rwts: total=20250,20276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:20.982 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:20.982 00:33:20.982 Run status group 0 (all jobs): 00:33:20.982 READ: bw=39.5MiB/s (41.4MB/s), 
39.5MiB/s-39.5MiB/s (41.4MB/s-41.4MB/s), io=79.1MiB (82.9MB), run=2005-2005msec 00:33:20.982 WRITE: bw=39.5MiB/s (41.4MB/s), 39.5MiB/s-39.5MiB/s (41.4MB/s-41.4MB/s), io=79.2MiB (83.1MB), run=2005-2005msec 00:33:20.982 20:54:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:20.982 20:54:09 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=a7720898-160d-43fe-85f7-3aa5e71a08a7 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb a7720898-160d-43fe-85f7-3aa5e71a08a7 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=a7720898-160d-43fe-85f7-3aa5e71a08a7 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:22.349 { 00:33:22.349 "uuid": "c29dc882-d7cd-41a7-a15e-e4a7ca213e0b", 00:33:22.349 "name": "lvs_0", 00:33:22.349 "base_bdev": "Nvme0n1", 00:33:22.349 "total_data_clusters": 1862, 00:33:22.349 "free_clusters": 0, 00:33:22.349 "block_size": 512, 00:33:22.349 "cluster_size": 1073741824 00:33:22.349 }, 00:33:22.349 { 00:33:22.349 "uuid": "a7720898-160d-43fe-85f7-3aa5e71a08a7", 00:33:22.349 "name": "lvs_n_0", 00:33:22.349 "base_bdev": "a1e03957-29ab-4614-91dd-0a45be3d6b0f", 00:33:22.349 "total_data_clusters": 476206, 00:33:22.349 "free_clusters": 476206, 00:33:22.349 "block_size": 512, 00:33:22.349 "cluster_size": 4194304 00:33:22.349 } 00:33:22.349 ]' 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a7720898-160d-43fe-85f7-3aa5e71a08a7") .free_clusters' 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=476206 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a7720898-160d-43fe-85f7-3aa5e71a08a7") .cluster_size' 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1904824 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1904824 00:33:22.349 1904824 00:33:22.349 20:54:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:33:23.279 ffeaa2d0-12f8-4904-8736-61e521774636 00:33:23.279 20:54:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:23.534 20:54:11 
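
The nested store's numbers above check out the same way: 476206 free clusters × 4194304 B per cluster is 476206 × 4 MiB = 1904824 MiB, exactly the size passed to bdev_lvol_create for lbd_nest_0. Note that the base store now reports free_clusters 0, since lbd_0 consumed all 1862 of its 1 GiB clusters.
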
nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:23.534 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:23.789 20:54:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:24.045 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:24.045 fio-3.35 00:33:24.045 Starting 1 thread 00:33:24.301 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.861 00:33:26.861 test: (groupid=0, jobs=1): err= 0: pid=1301202: Fri Jul 26 20:54:15 2024 00:33:26.861 read: IOPS=10.2k, BW=39.8MiB/s (41.8MB/s)(79.9MiB/2006msec) 00:33:26.861 slat (nsec): min=1335, max=21057, avg=1458.57, stdev=314.17 00:33:26.861 clat (usec): min=3251, max=10726, avg=6200.87, stdev=206.35 00:33:26.861 lat (usec): min=3254, max=10728, avg=6202.33, stdev=206.33 00:33:26.861 clat percentiles (usec): 00:33:26.861 | 1.00th=[ 5604], 5.00th=[ 6128], 10.00th=[ 6128], 20.00th=[ 6194], 00:33:26.861 | 30.00th=[ 6194], 40.00th=[ 6194], 50.00th=[ 6194], 60.00th=[ 6194], 00:33:26.861 | 70.00th=[ 6194], 80.00th=[ 6259], 90.00th=[ 6259], 95.00th=[ 6259], 00:33:26.861 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 9110], 99.95th=[10028], 00:33:26.861 | 99.99th=[10683] 00:33:26.861 bw ( KiB/s): min=39352, max=41456, per=99.94%, avg=40776.00, stdev=978.25, samples=4 00:33:26.861 iops : min= 9838, max=10364, avg=10194.00, stdev=244.56, samples=4 00:33:26.861 write: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(79.9MiB/2006msec); 0 zone resets 00:33:26.861 slat (nsec): min=1372, max=17742, avg=1547.77, stdev=303.60 00:33:26.861 clat (usec): min=3259, max=10740, avg=6219.72, stdev=198.60 00:33:26.861 lat (usec): min=3262, max=10741, avg=6221.27, stdev=198.57 00:33:26.861 clat percentiles (usec): 00:33:26.861 | 1.00th=[ 5604], 5.00th=[ 6128], 10.00th=[ 6194], 20.00th=[ 6194], 00:33:26.861 | 30.00th=[ 6194], 40.00th=[ 6194], 50.00th=[ 6194], 60.00th=[ 6259], 00:33:26.861 | 70.00th=[ 6259], 80.00th=[ 6259], 90.00th=[ 6259], 95.00th=[ 6259], 00:33:26.861 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[ 9110], 99.95th=[10028], 00:33:26.861 | 99.99th=[10683] 00:33:26.861 bw ( KiB/s): min=39760, max=41376, per=100.00%, avg=40826.00, stdev=730.04, samples=4 00:33:26.861 iops : min= 9940, max=10344, avg=10206.50, stdev=182.51, samples=4 00:33:26.861 lat (msec) : 4=0.05%, 10=99.89%, 20=0.06% 00:33:26.861 cpu : usr=99.60%, sys=0.05%, ctx=16, majf=0, minf=13 00:33:26.861 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:26.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:26.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:26.861 issued rwts: total=20461,20467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:26.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:26.861 00:33:26.861 Run status group 0 (all jobs): 00:33:26.861 READ: bw=39.8MiB/s (41.8MB/s), 39.8MiB/s-39.8MiB/s (41.8MB/s-41.8MB/s), io=79.9MiB (83.8MB), run=2006-2006msec 00:33:26.861 WRITE: bw=39.9MiB/s (41.8MB/s), 39.9MiB/s-39.9MiB/s (41.8MB/s-41.8MB/s), io=79.9MiB (83.8MB), run=2006-2006msec 00:33:26.861 20:54:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:26.861 20:54:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:26.861 20:54:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:34.950 
20:54:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:34.950 20:54:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:40.191 20:54:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:40.191 20:54:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:43.462 rmmod nvme_rdma 00:33:43.462 rmmod nvme_fabrics 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1296136 ']' 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1296136 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1296136 ']' 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1296136 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1296136 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1296136' 00:33:43.462 killing process with pid 1296136 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1296136 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1296136 00:33:43.462 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:43.463 
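
The teardown just traced unwinds strictly in reverse order of creation, then drops the kernel modules and the target process. The calls, copied from the log, with rpc.py again abbreviating the full scripts path:

rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0    # nested lvol first
rpc.py bdev_lvol_delete_lvstore -l lvs_n_0    # then the nested store
rpc.py bdev_lvol_delete lvs_0/lbd_0           # then the base lvol
rpc.py bdev_lvol_delete_lvstore -l lvs_0      # and the base store
rpc.py bdev_nvme_detach_controller Nvme0      # release the PCIe controller
modprobe -v -r nvme-rdma                      # nvmftestfini: unload host modules
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                               # killprocess on the nvmf_tgt pid
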
20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:43.463 00:33:43.463 real 0m51.486s 00:33:43.463 user 3m37.879s 00:33:43.463 sys 0m8.898s 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.463 ************************************ 00:33:43.463 END TEST nvmf_fio_host 00:33:43.463 ************************************ 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.463 ************************************ 00:33:43.463 START TEST nvmf_failover 00:33:43.463 ************************************ 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:33:43.463 * Looking for test storage... 00:33:43.463 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.463 20:54:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.463 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.463 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:43.463 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:43.463 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.463 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.463 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:43.463 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:43.463 20:54:32 
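
The host identity values set above come straight from nvme-cli; the host ID is the bare UUID portion of the generated NQN. A two-line sketch (nvme gen-hostnqn is a real nvme-cli command, but the ##*: split is an assumption about how common.sh derives the ID):

NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-...
NVME_HOSTID=${NVME_HOSTNQN##*:}      # strip through the last ':', leaving the UUID
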
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:43.463 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.463 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:43.721 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:33:43.722 20:54:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@296 
-- # e810=() 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:51.831 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 
15' 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:51.831 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:51.831 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:51.832 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:51.832 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 
-- # ip -o -4 addr show mlx_0_0 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:33:51.832 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:51.832 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:51.832 altname enp217s0f0np0 00:33:51.832 altname ens818f0np0 00:33:51.832 inet 192.168.100.8/24 scope global mlx_0_0 00:33:51.832 valid_lft forever preferred_lft forever 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:33:51.832 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:51.832 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:51.832 altname enp217s0f1np1 00:33:51.832 altname ens818f1np1 00:33:51.832 inet 192.168.100.9/24 scope global mlx_0_1 00:33:51.832 valid_lft forever preferred_lft forever 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:51.832 20:54:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:51.832 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:51.832 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:51.832 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
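Annotator's note: the address harvesting traced above reduces to one short pipeline per RDMA netdev; get_ip_address in nvmf/common.sh is essentially the ip/awk/cut chain visible in the xtrace. A minimal standalone sketch, assuming the mlx_0_* interface names seen in this run:

    # Print the first IPv4 address on an interface, prefix length stripped
    # (the same ip | awk | cut pipeline as the get_ip_address trace above).
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this test bed
    get_ip_address mlx_0_1   # prints 192.168.100.9
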
00:33:51.832 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:51.832 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:51.832 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:33:51.832 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:51.833 192.168.100.9' 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:51.833 192.168.100.9' 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:51.833 192.168.100.9' 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:51.833 20:54:40 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1308150 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1308150 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1308150 ']' 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:51.833 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:51.833 [2024-07-26 20:54:40.125794] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:33:51.833 [2024-07-26 20:54:40.125847] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:51.833 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.833 [2024-07-26 20:54:40.213368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:51.833 [2024-07-26 20:54:40.252263] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:51.833 [2024-07-26 20:54:40.252300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:51.833 [2024-07-26 20:54:40.252309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:51.833 [2024-07-26 20:54:40.252318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:51.833 [2024-07-26 20:54:40.252325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
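Annotator's note: the nvmfappstart step above amounts to launching the target binary in the background and polling its RPC socket until it answers. A rough standalone equivalent, with the flags copied from the trace; the polling loop is only an approximation of what waitforlisten does:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Instance 0, tracepoint group mask 0xFFFF, reactors on cores 1-3 (mask 0xE).
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Wait until the app answers on the default RPC socket.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
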
00:33:51.833 [2024-07-26 20:54:40.252424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:51.833 [2024-07-26 20:54:40.252508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:51.833 [2024-07-26 20:54:40.252510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.396 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:52.397 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:52.397 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:52.397 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:52.397 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:52.654 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:52.654 20:54:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:52.654 [2024-07-26 20:54:41.157378] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1924520/0x1928a10) succeed. 00:33:52.654 [2024-07-26 20:54:41.166599] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1925ac0/0x196a0a0) succeed. 00:33:52.911 20:54:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:52.911 Malloc0 00:33:53.168 20:54:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:53.168 20:54:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:53.448 20:54:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:53.448 [2024-07-26 20:54:41.977640] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:53.717 20:54:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:33:53.717 [2024-07-26 20:54:42.166057] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:33:53.717 20:54:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:33:53.974 [2024-07-26 20:54:42.354744] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:33:53.974 20:54:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1308671 00:33:53.974 20:54:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:33:53.974 20:54:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:53.975 20:54:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1308671 /var/tmp/bdevperf.sock 00:33:53.975 20:54:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1308671 ']' 00:33:53.975 20:54:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:53.975 20:54:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:53.975 20:54:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:53.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:53.975 20:54:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:53.975 20:54:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:54.905 20:54:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:54.905 20:54:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:54.905 20:54:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:55.161 NVMe0n1 00:33:55.161 20:54:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:55.418 00:33:55.418 20:54:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:55.418 20:54:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1308831 00:33:55.418 20:54:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:56.347 20:54:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:56.603 20:54:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:59.880 20:54:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:59.880 00:33:59.880 20:54:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:33:59.880 20:54:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:34:03.154 20:54:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:03.154 [2024-07-26 20:54:51.515291] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:03.154 20:54:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:34:04.084 20:54:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:34:04.340 20:54:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1308831 00:34:10.902 0 00:34:10.902 20:54:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1308671 00:34:10.902 20:54:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1308671 ']' 00:34:10.902 20:54:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1308671 00:34:10.902 20:54:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:34:10.902 20:54:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:10.902 20:54:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1308671 00:34:10.902 20:54:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:10.902 20:54:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:10.902 20:54:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1308671' 00:34:10.902 killing process with pid 1308671 00:34:10.902 20:54:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1308671 00:34:10.902 20:54:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1308671 00:34:10.902 20:54:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:10.902 [2024-07-26 20:54:42.430197] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:34:10.902 [2024-07-26 20:54:42.430258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308671 ] 00:34:10.902 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.902 [2024-07-26 20:54:42.516059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.902 [2024-07-26 20:54:42.555188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.902 Running I/O for 15 seconds... 
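Annotator's note: condensed, the failover choreography from host/failover.sh traced above is a handful of RPCs issued while bdevperf keeps 128 queued 4096-byte verify I/Os in flight (-q 128 -o 4096 -w verify -t 15 -f). A sketch using the NQN, address, and ports from this run; $rpc is shorthand for the workspace rpc.py:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Drop the portal bdevperf first connected through, forcing a failover.
    $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4420
    sleep 3
    # Offer the initiator a second path on 4422, then retire 4421.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n $nqn
    $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4421
    sleep 3
    # Fail back: restore 4420, then remove 4422 and let the 15 s run complete.
    $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4422
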
00:34:10.902 [2024-07-26 20:54:45.916183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:10.902 [2024-07-26 20:54:45.916229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 
[... the same print_command/print_completion pair repeats for every in-flight 8-block WRITE from lba:26664 through lba:27512, one pair per command; every completion is ABORTED - SQ DELETION (00/08), and the records differ only in cid, lba, and microsecond timestamps ...]
00:34:10.905 [2024-07-26 20:54:45.918423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:10.905 [2024-07-26 20:54:45.918432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 
00:34:10.905 [2024-07-26 20:54:45.918442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:58 nsid:1 lba:27528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:27544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:27552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:27560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:27584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918647] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.905 [2024-07-26 20:54:45.918739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x184400 00:34:10.905 [2024-07-26 20:54:45.918760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x184400 00:34:10.905 [2024-07-26 20:54:45.918780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.918792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x184400 00:34:10.905 [2024-07-26 20:54:45.918801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.920675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:10.905 [2024-07-26 20:54:45.920690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:10.905 [2024-07-26 20:54:45.920699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26648 len:8 PRP1 0x0 PRP2 0x0 00:34:10.905 [2024-07-26 20:54:45.920710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:10.905 [2024-07-26 20:54:45.920752] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x2000192e4980 was disconnected and freed. reset controller. 00:34:10.906 [2024-07-26 20:54:45.920765] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:34:10.906 [2024-07-26 20:54:45.920775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.906 [2024-07-26 20:54:45.923477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.906 [2024-07-26 20:54:45.937979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:10.906 [2024-07-26 20:54:45.986553] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:10.906 [2024-07-26 20:54:49.340586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.906 [2024-07-26 20:54:49.340632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:3eff200 sqhd:d290 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.340645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.906 [2024-07-26 20:54:49.340654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:3eff200 sqhd:d290 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.340664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.906 [2024-07-26 20:54:49.340673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:3eff200 sqhd:d290 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.340687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:10.906 [2024-07-26 20:54:49.340696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:3eff200 sqhd:d290 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.342441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:10.906 [2024-07-26 20:54:49.342457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
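For reference: the "(00/08)" printed on every aborted completion above is status code type 0x00 (generic) / status code 0x08 ("command aborted due to SQ deletion"), which is what in-flight I/O sees when its submission queue is deleted during the reset. A minimal sketch, assuming only the public spdk/nvme.h API, of how a consumer could classify such completions for retry after failover (the helper name io_should_retry is hypothetical, not part of SPDK):

    /* Classify a completion the way the log above prints it: I/O aborted
     * because its submission queue was deleted mid-reset is retryable. */
    #include <stdbool.h>
    #include "spdk/nvme.h"

    static bool
    io_should_retry(const struct spdk_nvme_cpl *cpl)
    {
            if (!spdk_nvme_cpl_is_error(cpl)) {
                    return false; /* completed successfully, nothing to do */
            }
            /* "(00/08)" == SCT 0x00 (generic) / SC 0x08 (ABORTED - SQ DELETION) */
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }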
00:34:10.906 [2024-07-26 20:54:49.342469] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:34:10.906 [2024-07-26 20:54:49.342480] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:10.906 [2024-07-26 20:54:49.342498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:118112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x184400 00:34:10.906 [2024-07-26 20:54:49.342508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.342555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:118120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x184400 00:34:10.906 [2024-07-26 20:54:49.342566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.342599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:118128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x184400 00:34:10.906 [2024-07-26 20:54:49.342610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.342647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.342658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.342690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.342700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.342732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.342741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.342772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:118544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.342784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.342814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.342825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.342856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:118560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.342865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.342896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.342909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.342940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.342951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.342982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:118136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x184400 00:34:10.906 [2024-07-26 20:54:49.342992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:118144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x184400 00:34:10.906 [2024-07-26 20:54:49.343034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.343076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:118592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.343118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:118600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.343159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.343199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.343240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 
20:54:49.343281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:118632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.343323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.343363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.343406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.343447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:118664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.343487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:118672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.343529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.906 [2024-07-26 20:54:49.343570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.906 [2024-07-26 20:54:49.343600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:118688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.907 [2024-07-26 20:54:49.343610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.343644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.907 [2024-07-26 20:54:49.343654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.343684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:10.907 [2024-07-26 20:54:49.343694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.343725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.343735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.343766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:118160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.343776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.343806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.343817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.343848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:118176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.343858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.343889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:118184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.343901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.343931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.343942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.343972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.907 [2024-07-26 20:54:49.343982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.907 [2024-07-26 20:54:49.344024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:118232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:118728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.907 [2024-07-26 20:54:49.344396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.907 [2024-07-26 20:54:49.344437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.907 [2024-07-26 20:54:49.344478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.907 [2024-07-26 20:54:49.344519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:118760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.907 [2024-07-26 20:54:49.344559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.907 [2024-07-26 20:54:49.344599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.907 [2024-07-26 20:54:49.344643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.907 [2024-07-26 20:54:49.344683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344848] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:118296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x184400 00:34:10.907 [2024-07-26 20:54:49.344932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.907 [2024-07-26 20:54:49.344964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x184400 00:34:10.908 [2024-07-26 20:54:49.344973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x184400 00:34:10.908 [2024-07-26 20:54:49.345015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:118792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 
nsid:1 lba:118832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:118848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x184400 00:34:10.908 [2024-07-26 20:54:49.345381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:118336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x184400 00:34:10.908 [2024-07-26 20:54:49.345422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x184400 00:34:10.908 [2024-07-26 20:54:49.345463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x184400 00:34:10.908 [2024-07-26 20:54:49.345505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x184400 00:34:10.908 [2024-07-26 20:54:49.345547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x184400 00:34:10.908 [2024-07-26 20:54:49.345589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x184400 00:34:10.908 [2024-07-26 20:54:49.345634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x184400 00:34:10.908 [2024-07-26 20:54:49.345677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:118856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:118888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.345963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.345993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.346004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.346034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:118920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:10.908 [2024-07-26 20:54:49.346044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.346075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:118928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.346086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.346116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.346127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.346157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.346166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.346197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.346207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.346237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.346247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.346278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:118968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.346288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.346319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:10.908 [2024-07-26 20:54:49.346329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.346363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x184400 00:34:10.908 [2024-07-26 20:54:49.346374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.346405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:118400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x184400 00:34:10.908 [2024-07-26 20:54:49.346415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0 00:34:10.908 [2024-07-26 20:54:49.346446] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x184400
00:34:10.908 [2024-07-26 20:54:49.346457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0
... (the remaining queued commands on qid:1 -- READs lba:118416-118512 and WRITEs lba:118984-119120 -- are printed and aborted with the same ABORTED - SQ DELETION (00/08) status) ...
00:34:10.909 [2024-07-26 20:54:49.362439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:10.909 [2024-07-26 20:54:49.362460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:10.909 [2024-07-26 20:54:49.362470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119128 len:8 PRP1 0x0 PRP2 0x0
00:34:10.909 [2024-07-26 20:54:49.362483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:10.909 [2024-07-26 20:54:49.362549] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller.
00:34:10.909 [2024-07-26 20:54:49.362560] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:34:10.909 [2024-07-26 20:54:49.362588] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:34:10.909 [2024-07-26 20:54:49.365279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.909 [2024-07-26 20:54:49.410374] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
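The burst above is SPDK's standard reset-path logging: once failover deletes the I/O submission queue, nvme_qpair.c prints every still-queued command (243:nvme_io_qpair_print_command) together with its forced completion (474:spdk_nvme_print_completion), all carrying the ABORTED - SQ DELETION (00/08) status, and the cycle ends when bdev_nvme reports the reset successful. When triaging a capture like this, a short shell pass reduces the noise to the counts that matter; a minimal sketch, assuming the output was saved to a file named failover.log (a hypothetical name, not produced by the test itself):

    # Tally how many queued READs vs WRITEs were aborted, then count reset cycles.
    grep -Eo 'print_command: \*NOTICE\*: (READ|WRITE)' failover.log | sort | uniq -c
    grep -c 'Resetting controller successful' failover.log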
00:34:10.909 [2024-07-26 20:54:53.708227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:10.909 [2024-07-26 20:54:53.708268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b010b000 sqhd:52b0 p:0 m:0 dnr:0
... (the remaining queued commands on qid:1 -- WRITEs lba:89864-90376 and READs lba:89360-89840 -- are printed and aborted with the same ABORTED - SQ DELETION (00/08) status) ...
00:34:10.913 [2024-07-26 20:54:53.712750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:10.913 [2024-07-26 20:54:53.712765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:10.913 [2024-07-26 20:54:53.712774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89848 len:8 PRP1 0x0 PRP2 0x0
00:34:10.913 [2024-07-26 20:54:53.712785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:10.913 [2024-07-26 20:54:53.712827] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller.
00:34:10.913 [2024-07-26 20:54:53.712839] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:34:10.913 [2024-07-26 20:54:53.712849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.913 [2024-07-26 20:54:53.715556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.913 [2024-07-26 20:54:53.729492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:34:10.913 [2024-07-26 20:54:53.771269] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
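The tail of this second burst shows the full failover sequence at the bdev layer: bdev_nvme_failover_trid moves the active path from 192.168.100.8:4422 back to 4420, nvme_ctrlr_fail marks the controller failed, the RDMA transport reports CQ transport error -6 for the torn-down connection, and the reset then completes on the new path. A one-liner for pulling just these state transitions out of the same hypothetical failover.log, skipping the per-command abort noise:

    # Show only controller/path state transitions.
    grep -E 'failover_trid|nvme_ctrlr_fail|nvme_ctrlr_disconnect|reset_ctrlr_complete|CQ transport error' failover.log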
00:34:10.913
00:34:10.913 Latency(us)
00:34:10.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:10.913 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:10.913 Verification LBA range: start 0x0 length 0x4000
00:34:10.913 NVMe0n1 : 15.00 14435.46 56.39 329.76 0.00 8647.53 322.76 1033476.51
00:34:10.913 ===================================================================================================================
00:34:10.913 Total : 14435.46 56.39 329.76 0.00 8647.53 322.76 1033476.51
00:34:10.913 Received shutdown signal, test time was about 15.000000 seconds
00:34:10.913
00:34:10.913 Latency(us)
00:34:10.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:10.913 ===================================================================================================================
00:34:10.913 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
20:54:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:34:10.913 20:54:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:34:10.913 20:54:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:34:10.913 20:54:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1311372
00:34:10.913 20:54:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:34:10.913 20:54:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1311372 /var/tmp/bdevperf.sock
00:34:10.913 20:54:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1311372 ']'
00:34:10.913 20:54:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:10.913 20:54:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:10.913 20:54:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
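The trace above grades the first bdevperf run: the script requires exactly three 'Resetting controller successful' lines (one per provoked failover) before relaunching bdevperf with -z (wait for RPC) on its own socket. A minimal sketch of that assertion pattern, with try.txt standing in for the captured output as in this trace (the error message is illustrative, not the script's own):

    # Require exactly three completed failovers in the captured output.
    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful resets, saw $count" >&2
        exit 1
    fi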
00:34:10.913 20:54:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:10.913 20:54:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:11.475 20:55:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:11.475 20:55:00 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:34:11.475 20:55:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:11.731 [2024-07-26 20:55:00.165797] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:34:11.731 20:55:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:34:11.987 [2024-07-26 20:55:00.346397] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:34:11.987 20:55:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:12.243 NVMe0n1 00:34:12.243 20:55:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:12.500 00:34:12.500 20:55:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:12.756 00:34:12.756 20:55:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:12.756 20:55:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:12.756 20:55:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:13.012 20:55:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:16.310 20:55:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:16.310 20:55:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:16.310 20:55:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:16.310 20:55:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1312191 00:34:16.310 20:55:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1312191 00:34:17.240 0 00:34:17.240 20:55:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:17.240 [2024-07-26 20:54:59.186647] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:34:17.240 [2024-07-26 20:54:59.186708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311372 ] 00:34:17.240 EAL: No free 2048 kB hugepages reported on node 1 00:34:17.240 [2024-07-26 20:54:59.273676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.240 [2024-07-26 20:54:59.309151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.240 [2024-07-26 20:55:01.436439] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:34:17.240 [2024-07-26 20:55:01.437073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.240 [2024-07-26 20:55:01.437104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.240 [2024-07-26 20:55:01.461456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:17.240 [2024-07-26 20:55:01.477677] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:17.240 Running I/O for 1 seconds... 00:34:17.240 00:34:17.240 Latency(us) 00:34:17.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.240 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:17.240 Verification LBA range: start 0x0 length 0x4000 00:34:17.240 NVMe0n1 : 1.00 18229.73 71.21 0.00 0.00 6983.55 2621.44 11324.62 00:34:17.240 =================================================================================================================== 00:34:17.240 Total : 18229.73 71.21 0.00 0.00 6983.55 2621.44 11324.62 00:34:17.240 20:55:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:17.240 20:55:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:17.497 20:55:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:17.754 20:55:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:17.754 20:55:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:18.011 20:55:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:18.011 20:55:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:21.284 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:21.284 20:55:09 
nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:21.284 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1311372 00:34:21.284 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1311372 ']' 00:34:21.284 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1311372 00:34:21.284 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:34:21.284 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:21.284 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1311372 00:34:21.284 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:21.284 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:21.284 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1311372' 00:34:21.284 killing process with pid 1311372 00:34:21.284 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1311372 00:34:21.284 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1311372 00:34:21.540 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:21.540 20:55:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:34:21.797 rmmod nvme_rdma 00:34:21.797 rmmod nvme_fabrics 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1308150 ']' 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1308150 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1308150 ']' 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1308150 00:34:21.797 20:55:10 
nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1308150 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1308150' 00:34:21.797 killing process with pid 1308150 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1308150 00:34:21.797 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1308150 00:34:22.054 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:22.054 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:34:22.054 00:34:22.054 real 0m38.616s 00:34:22.054 user 2m3.876s 00:34:22.054 sys 0m8.695s 00:34:22.054 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:22.054 20:55:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:22.054 ************************************ 00:34:22.054 END TEST nvmf_failover 00:34:22.054 ************************************ 00:34:22.054 20:55:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:34:22.054 20:55:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:22.054 20:55:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:22.054 20:55:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.054 ************************************ 00:34:22.054 START TEST nvmf_host_discovery 00:34:22.054 ************************************ 00:34:22.054 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:34:22.312 * Looking for test storage... 
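Each suite here runs under the run_test helper, which prints the START TEST / END TEST banners and the real/user/sys timing visible around every suite in this log. A simplified sketch of what such a wrapper does (the actual helper in autotest_common.sh also manages xtrace state and nested suite timing):

    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"   # run the suite; bash prints real/user/sys when it exits
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }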
00:34:22.312 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:34:22.312 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
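discovery.sh bails out early on RDMA (host/discovery.sh@11-13): it compares the transport against rdma, prints the skip message, and exits 0 so the suite still counts as passed. The traced guard reduces to something like this sketch, with TEST_TRANSPORT standing in for however the script receives --transport:

    if [[ $TEST_TRANSPORT == rdma ]]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi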
00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:34:22.312 00:34:22.312 real 0m0.138s 00:34:22.312 user 0m0.053s 00:34:22.312 sys 0m0.094s 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.312 ************************************ 00:34:22.312 END TEST nvmf_host_discovery 00:34:22.312 ************************************ 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.312 ************************************ 00:34:22.312 START TEST nvmf_host_multipath_status 00:34:22.312 ************************************ 00:34:22.312 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:34:22.571 * Looking for test storage... 00:34:22.571 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.571 20:55:10 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:34:22.571 20:55:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:30.678 20:55:18 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:34:30.678 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:34:30.678 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:30.678 
20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:34:30.678 Found net devices under 0000:d9:00.0: mlx_0_0 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.678 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:34:30.679 Found net devices under 0000:d9:00.1: mlx_0_1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@68 -- # modprobe rdma_ucm 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip 
addr show mlx_0_0 00:34:30.679 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:30.679 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:34:30.679 altname enp217s0f0np0 00:34:30.679 altname ens818f0np0 00:34:30.679 inet 192.168.100.8/24 scope global mlx_0_0 00:34:30.679 valid_lft forever preferred_lft forever 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:34:30.679 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:30.679 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:34:30.679 altname enp217s0f1np1 00:34:30.679 altname ens818f1np1 00:34:30.679 inet 192.168.100.9/24 scope global mlx_0_1 00:34:30.679 valid_lft forever preferred_lft forever 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 
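allocate_nic_ips resolves each RDMA interface to its IPv4 address with the ip/awk/cut chain traced above (nvmf/common.sh@112-113). Collected into a function, the pattern looks like this sketch:

    get_ip_address() {
        local interface=$1
        # fourth field of `ip -o -4 addr show` is ADDR/PREFIX; strip the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig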
00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:34:30.679 192.168.100.9' 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:34:30.679 192.168.100.9' 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:34:30.679 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:34:30.680 192.168.100.9' 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:34:30.680 
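With both interfaces resolved, RDMA_IP_LIST holds one address per line and the first and second target IPs are peeled off with head/tail (nvmf/common.sh@456-458). An equivalent sketch:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9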
20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1317206 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1317206 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1317206 ']' 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:30.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:30.680 20:55:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:30.680 [2024-07-26 20:55:18.941071] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:34:30.680 [2024-07-26 20:55:18.941122] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:30.680 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.680 [2024-07-26 20:55:19.027022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:30.680 [2024-07-26 20:55:19.065897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:30.680 [2024-07-26 20:55:19.065934] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.680 [2024-07-26 20:55:19.065944] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.680 [2024-07-26 20:55:19.065953] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.680 [2024-07-26 20:55:19.065960] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
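nvmfappstart launches nvmf_tgt (-m 0x3 pins it to cores 0 and 1, matching the two reactors reported below), records nvmfpid, and blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming spdk/scripts/rpc.py is on PATH (the real helper in autotest_common.sh is more involved, e.g. it also enforces a retry limit):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        while ! rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; do
            kill -0 "$pid" || return 1   # give up if the target process died
            sleep 0.5
        done
    }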
00:34:30.680 [2024-07-26 20:55:19.066009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.680 [2024-07-26 20:55:19.066017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:31.245 20:55:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:31.245 20:55:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:34:31.245 20:55:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:31.245 20:55:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:31.245 20:55:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:31.245 20:55:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:31.245 20:55:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1317206 00:34:31.245 20:55:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:31.502 [2024-07-26 20:55:19.949080] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x142a6e0/0x142ebd0) succeed. 00:34:31.502 [2024-07-26 20:55:19.957973] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x142bbe0/0x1470260) succeed. 00:34:31.502 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:31.759 Malloc0 00:34:31.759 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:32.016 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:32.273 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:32.273 [2024-07-26 20:55:20.726944] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:32.273 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:32.530 [2024-07-26 20:55:20.903142] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:34:32.530 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1317501 00:34:32.530 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:32.530 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1317501 /var/tmp/bdevperf.sock 00:34:32.531 20:55:20 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1317501 ']' 00:34:32.531 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:32.531 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:32.531 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:32.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:32.531 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:32.531 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:32.531 20:55:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:33.460 20:55:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:33.460 20:55:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:34:33.460 20:55:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:33.460 20:55:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:33.717 Nvme0n1 00:34:33.717 20:55:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:33.974 Nvme0n1 00:34:33.974 20:55:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:33.974 20:55:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:35.920 20:55:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:35.920 20:55:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:34:36.178 20:55:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:34:36.435 20:55:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:37.367 20:55:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- 
# check_status true false true true true true 00:34:37.367 20:55:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:37.367 20:55:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.367 20:55:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:37.625 20:55:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.625 20:55:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:37.625 20:55:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.625 20:55:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:37.625 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:37.626 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:37.626 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.884 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:37.884 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.884 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:37.884 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.884 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:38.143 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.143 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:38.143 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.143 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:38.401 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.401 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:38.401 20:55:26 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.401 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:38.401 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.401 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:38.401 20:55:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:38.658 20:55:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:34:38.917 20:55:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:39.852 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:39.852 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:39.852 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.852 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:40.111 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:40.111 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:40.111 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.111 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:40.111 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.111 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:40.111 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.111 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:40.369 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.369 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:40.369 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.369 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:40.628 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.628 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:40.628 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.628 20:55:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:40.628 20:55:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.628 20:55:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:40.628 20:55:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.628 20:55:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:40.886 20:55:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.886 20:55:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:40.886 20:55:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:41.145 20:55:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:34:41.146 20:55:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:42.521 20:55:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:42.521 20:55:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:42.521 20:55:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.521 20:55:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:42.521 20:55:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:34:42.521 20:55:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:42.521 20:55:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:42.521 20:55:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.521 20:55:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:42.521 20:55:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:42.521 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:42.521 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.780 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.780 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:42.780 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.780 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:43.038 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.038 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:43.038 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.038 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:43.038 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.038 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:43.038 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.038 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:43.296 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.296 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:43.296 20:55:31 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:43.554 20:55:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:34:43.554 20:55:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:44.928 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:44.928 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:44.928 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.928 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:44.928 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.928 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:44.928 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.928 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:44.928 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:44.928 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:44.928 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.928 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:45.186 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.186 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:45.186 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.186 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:45.443 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.443 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:45.443 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.443 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:45.443 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.443 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:45.444 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.444 20:55:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:45.701 20:55:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:45.701 20:55:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:45.701 20:55:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:34:45.958 20:55:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:34:45.959 20:55:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:47.333 20:55:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:47.333 20:55:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:47.333 20:55:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.333 20:55:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:47.333 20:55:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:47.333 20:55:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:47.333 20:55:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.333 20:55:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:47.333 20:55:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ 
false == \f\a\l\s\e ]] 00:34:47.333 20:55:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:47.333 20:55:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.333 20:55:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:47.591 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.591 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:47.591 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.591 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:47.847 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.847 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:47.847 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.847 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:47.847 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:47.847 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:47.847 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.847 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:48.104 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:48.105 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:48.105 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:34:48.362 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:34:48.619 20:55:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:49.606 20:55:37 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:49.606 20:55:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:49.606 20:55:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.606 20:55:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:49.606 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:49.606 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:49.606 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:49.606 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.864 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.864 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:49.864 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.864 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:50.122 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.122 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:50.122 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:50.122 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.122 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.122 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:50.122 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.122 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:50.380 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:50.380 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:50.380 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.380 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:50.638 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.638 20:55:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:50.638 20:55:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:50.639 20:55:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:34:50.897 20:55:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:34:51.155 20:55:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:52.089 20:55:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:52.089 20:55:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:52.089 20:55:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:52.089 20:55:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.348 20:55:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.348 20:55:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:52.348 20:55:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.348 20:55:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:52.348 20:55:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.348 20:55:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:52.348 20:55:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.348 20:55:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:52.606 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.606 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:52.606 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.606 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:52.864 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.864 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:52.864 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.864 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:52.864 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.864 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:52.864 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.864 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:53.122 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.122 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:53.122 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:53.380 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:34:53.638 20:55:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:54.573 20:55:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:54.573 20:55:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:54.573 20:55:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:54.573 20:55:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:54.831 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:54.831 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:54.831 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.831 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:54.831 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.831 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:54.831 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.831 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:55.088 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.088 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:55.088 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.088 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:55.345 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.345 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:55.345 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.345 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:55.345 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.345 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:55.345 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.345 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 
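Each check_status round in these records is six port_status probes: for each listener port it asserts the current, connected, and accessible fields that bdev_nvme_get_io_paths reports, after set_ANA_state flips the two listeners' ANA states and a one-second settle. A hedged reconstruction of those helpers from the xtrace above (multipath_status.sh@59-73; the actual script may differ in detail):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    set_ANA_state() {   # set_ANA_state <state-for-4420> <state-for-4421>
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4421 -n "$2"
    }

    port_status() {     # port_status <trsvcid> <field> <expected>
        local status
        status=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

    check_status() {    # current, connected, accessible -- port 4420 then 4421
        port_status 4420 current "$1"    && port_status 4421 current "$2" &&
        port_status 4420 connected "$3"  && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

Read against the rounds above: connected stays true throughout (the RDMA connections survive ANA changes), accessible tracks whether each leg's ANA state is usable, and current marks the path I/O is routed on; once bdev_nvme_set_multipath_policy switches Nvme0n1 to active_active, both paths report current=true at once.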
00:34:55.603 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.603 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:55.603 20:55:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:55.603 20:55:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:34:55.861 20:55:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:56.796 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:56.796 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:56.796 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.796 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:57.055 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.055 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:57.055 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.055 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:57.314 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.314 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:57.314 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:57.314 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.573 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.573 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:57.573 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.573 20:55:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq 
-r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:57.573 20:55:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.573 20:55:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:57.573 20:55:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.574 20:55:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:57.834 20:55:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.834 20:55:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:57.834 20:55:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:57.834 20:55:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.093 20:55:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.093 20:55:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:58.093 20:55:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:58.093 20:55:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:34:58.353 20:55:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:59.289 20:55:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:59.289 20:55:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:59.289 20:55:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.289 20:55:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:59.549 20:55:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.549 20:55:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:59.549 20:55:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.549 20:55:47 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:59.808 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:59.808 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:59.808 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.808 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:59.808 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.808 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:59.808 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.809 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:00.068 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.068 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:00.068 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.068 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:00.327 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.328 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:00.328 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.328 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:00.328 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:00.328 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1317501 00:35:00.328 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1317501 ']' 00:35:00.328 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1317501 00:35:00.328 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:35:00.328 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:00.328 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1317501 00:35:00.591 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:35:00.591 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:35:00.591 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1317501' 00:35:00.591 killing process with pid 1317501 00:35:00.591 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1317501 00:35:00.591 20:55:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1317501 00:35:00.591 Connection closed with partial response: 00:35:00.591 00:35:00.591 00:35:00.591 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1317501 00:35:00.591 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:00.591 [2024-07-26 20:55:20.963156] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:35:00.591 [2024-07-26 20:55:20.963216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1317501 ] 00:35:00.591 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.591 [2024-07-26 20:55:21.043862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.591 [2024-07-26 20:55:21.081937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:00.591 Running I/O for 90 seconds... 
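The records from multipath_status.sh@141 onward replay bdevperf's own trace (try.txt): its startup banner and "Running I/O for 90 seconds..." appear above, and per-command completion notices follow. Each completion printed as ASYMMETRIC ACCESS INACCESSIBLE (03/02) is an I/O the target failed with the NVMe path-related status (status code type 0x3, status code 0x2, ANA inaccessible) while that leg was set inaccessible; the host's multipath layer is expected to retry it on the surviving path. A quick, hypothetical way to tally those completions when triaging such a run (not part of the test itself):

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt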
00:35:00.591 [2024-07-26 20:55:34.318660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:00.591 [2024-07-26 20:55:34.318698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... remaining command/completion pairs of the 20:55:34 burst elided: every outstanding WRITE (lba 63344-63640, SGL DATA BLOCK) and READ (lba 62624-63328, SGL KEYED DATA BLOCK, key:0x183200) on qid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:35:00.594 [2024-07-26 20:55:46.759099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x183200
00:35:00.594 [2024-07-26 20:55:46.759136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
[... a second burst at 20:55:46 elided in the same way: WRITEs (lba 34664-35120) and READs (lba 34192-34608) on qid:1, all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:35:00.596 [2024-07-26 20:55:46.761024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34
nsid:1 lba:34648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183200 00:35:00.596 [2024-07-26 20:55:46.761033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:00.596 [2024-07-26 20:55:46.761044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.596 [2024-07-26 20:55:46.761054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:00.596 [2024-07-26 20:55:46.761065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183200 00:35:00.596 [2024-07-26 20:55:46.761075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:00.596 [2024-07-26 20:55:46.761086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x183200 00:35:00.596 [2024-07-26 20:55:46.761094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:00.596 [2024-07-26 20:55:46.761106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x183200 00:35:00.596 [2024-07-26 20:55:46.761116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:00.596 Received shutdown signal, test time was about 26.307702 seconds 00:35:00.596 00:35:00.596 Latency(us) 00:35:00.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.596 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:00.596 Verification LBA range: start 0x0 length 0x4000 00:35:00.596 Nvme0n1 : 26.31 16003.76 62.51 0.00 0.00 7976.92 40.35 3019898.88 00:35:00.596 =================================================================================================================== 00:35:00.596 Total : 16003.76 62.51 0.00 0.00 7976.92 40.35 3019898.88 00:35:00.596 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:35:00.856 20:55:49 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:35:00.856 rmmod nvme_rdma 00:35:00.856 rmmod nvme_fabrics 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1317206 ']' 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1317206 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1317206 ']' 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1317206 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1317206 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1317206' 00:35:00.856 killing process with pid 1317206 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1317206 00:35:00.856 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1317206 00:35:01.116 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:01.116 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:35:01.116 00:35:01.116 real 0m38.822s 00:35:01.116 user 1m45.457s 00:35:01.116 sys 0m10.308s 00:35:01.116 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:01.116 20:55:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:01.116 ************************************ 00:35:01.116 END TEST nvmf_host_multipath_status 00:35:01.116 ************************************ 00:35:01.116 20:55:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:35:01.116 20:55:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:01.116 20:55:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:01.116 20:55:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.375 ************************************ 00:35:01.376 START TEST 
nvmf_discovery_remove_ifc 00:35:01.376 ************************************ 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:35:01.376 * Looking for test storage... 00:35:01.376 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n 
'' ']'
00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']'
00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:35:01.376 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0
00:35:01.376
00:35:01.376 real 0m0.146s
00:35:01.376 user 0m0.063s
00:35:01.376 sys 0m0.093s
00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:35:01.376 ************************************
00:35:01.376 END TEST nvmf_discovery_remove_ifc
00:35:01.376 ************************************
00:35:01.376 20:55:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:35:01.376 20:55:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:35:01.376 20:55:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:35:01.376 20:55:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.376 ************************************
00:35:01.376 START TEST nvmf_identify_kernel_target
00:35:01.376 ************************************
00:35:01.376 20:55:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:35:01.636 * Looking for test storage...
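The exit path traced above is the entire body of the skipped test: discovery_remove_ifc.sh@14-16 is a transport guard and nothing else runs when the transport is RDMA. A minimal sketch of that guard, assuming the transport is carried in a TEST_TRANSPORT variable (the xtrace only ever shows the already-expanded value, rdma):

# Sketch of the guard at discovery_remove_ifc.sh@14-16. TEST_TRANSPORT is an
# assumed variable name; the trace above only shows its expanded value.
if [ "$TEST_TRANSPORT" == rdma ]; then
	echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
	exit 0
fi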
00:35:01.636 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:01.636 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:01.636 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:01.636 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:01.636 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:01.636 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:01.636 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:01.636 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:01.636 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:35:01.637 20:55:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 
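The nvmftestinit sequence traced at the start of this test reduces to a short chain of helpers before the per-device scan that follows below. A condensed sketch of that chain, with the helper bodies trimmed to the branches this run actually takes (error handling and the iso/virt paths of the real nvmf/common.sh are omitted, and TEST_TRANSPORT/NET_TYPE are the assumed variable names behind the expanded values rdma and phy seen in the trace):

# Condensed sketch of the init path traced above; not the full helpers.
nvmftestinit() {
	[ -z "$TEST_TRANSPORT" ] && return 1     # trace: '[' -z rdma ']'
	trap nvmftestfini SIGINT SIGTERM EXIT    # tear the target down on any exit
	prepare_net_devs
}

prepare_net_devs() {
	local -g is_hw=no
	remove_spdk_ns                           # clear leftover test network namespaces
	if [[ $NET_TYPE != virt ]]; then         # phy run: enumerate real RDMA NICs
		gather_supported_nvmf_pci_devs   # fills the e810/x722/mlx buckets declared above
	fi
}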
00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:35:09.837 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:35:09.837 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:35:09.837 Found net devices under 0000:d9:00.0: mlx_0_0 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:35:09.837 Found net devices under 0000:d9:00.1: mlx_0_1 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ 
rdma == tcp ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:35:09.837 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:09.838 20:55:57 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:35:09.838 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:09.838 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:35:09.838 altname enp217s0f0np0 00:35:09.838 altname ens818f0np0 00:35:09.838 inet 192.168.100.8/24 scope global mlx_0_0 00:35:09.838 valid_lft forever preferred_lft forever 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:35:09.838 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:09.838 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:35:09.838 altname enp217s0f1np1 00:35:09.838 altname ens818f1np1 00:35:09.838 inet 192.168.100.9/24 scope global mlx_0_1 00:35:09.838 valid_lft forever preferred_lft forever 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 
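The get_ip_address helper traced twice above is a single pipeline over ip -o -4 addr show: field 4 of the one-line output is the CIDR address, and cut strips the prefix length. A standalone equivalent of what the trace shows:

# Standalone equivalent of the get_ip_address helper traced above.
get_ip_address() {
	local interface=$1
	ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# On this node: get_ip_address mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9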
00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip 
-o -4 addr show mlx_0_1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:35:09.838 192.168.100.9' 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:35:09.838 192.168.100.9' 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:35:09.838 192.168.100.9' 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.838 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:35:09.839 
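The head/tail juggling above is how the two target addresses fall out of the newline-separated RDMA_IP_LIST; the same steps in isolation:

# How the trace above derives the target IPs (nvmf/common.sh@456-458).
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9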
20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:09.839 20:55:57 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:35:13.128 Waiting for block devices as requested 00:35:13.128 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:13.128 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:13.128 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:13.128 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:13.387 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:13.387 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:13.387 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:13.387 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:13.646 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:13.646 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:13.646 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:13.906 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:13.906 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:13.906 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:14.166 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:14.166 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:14.166 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # 
block_in_use nvme0n1 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:14.425 No valid GPT data, bailing 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:14.425 20:56:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:35:14.685 00:35:14.685 Discovery Log Number of Records 2, Generation counter 2 00:35:14.685 =====Discovery Log Entry 0====== 00:35:14.685 trtype: rdma 00:35:14.685 adrfam: ipv4 00:35:14.685 subtype: current discovery subsystem 00:35:14.685 treq: not specified, sq flow control disable supported 00:35:14.685 portid: 1 00:35:14.685 trsvcid: 4420 00:35:14.685 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:14.685 traddr: 192.168.100.8 00:35:14.685 eflags: none 00:35:14.685 rdma_prtype: not specified 00:35:14.685 rdma_qptype: connected 00:35:14.685 rdma_cms: rdma-cm 00:35:14.685 rdma_pkey: 0x0000 00:35:14.685 =====Discovery Log Entry 1====== 00:35:14.685 trtype: rdma 00:35:14.685 adrfam: ipv4 00:35:14.685 subtype: nvme subsystem 00:35:14.685 
treq: not specified, sq flow control disable supported 00:35:14.685 portid: 1 00:35:14.685 trsvcid: 4420 00:35:14.685 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:14.685 traddr: 192.168.100.8 00:35:14.685 eflags: none 00:35:14.685 rdma_prtype: not specified 00:35:14.685 rdma_qptype: connected 00:35:14.685 rdma_cms: rdma-cm 00:35:14.685 rdma_pkey: 0x0000 00:35:14.685 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:35:14.685 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:14.685 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.685 ===================================================== 00:35:14.685 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:14.685 ===================================================== 00:35:14.685 Controller Capabilities/Features 00:35:14.685 ================================ 00:35:14.685 Vendor ID: 0000 00:35:14.685 Subsystem Vendor ID: 0000 00:35:14.685 Serial Number: c67c237ba678c014a674 00:35:14.685 Model Number: Linux 00:35:14.685 Firmware Version: 6.7.0-68 00:35:14.685 Recommended Arb Burst: 0 00:35:14.685 IEEE OUI Identifier: 00 00 00 00:35:14.685 Multi-path I/O 00:35:14.685 May have multiple subsystem ports: No 00:35:14.685 May have multiple controllers: No 00:35:14.685 Associated with SR-IOV VF: No 00:35:14.685 Max Data Transfer Size: Unlimited 00:35:14.685 Max Number of Namespaces: 0 00:35:14.685 Max Number of I/O Queues: 1024 00:35:14.685 NVMe Specification Version (VS): 1.3 00:35:14.685 NVMe Specification Version (Identify): 1.3 00:35:14.685 Maximum Queue Entries: 128 00:35:14.685 Contiguous Queues Required: No 00:35:14.685 Arbitration Mechanisms Supported 00:35:14.685 Weighted Round Robin: Not Supported 00:35:14.685 Vendor Specific: Not Supported 00:35:14.685 Reset Timeout: 7500 ms 00:35:14.685 Doorbell Stride: 4 bytes 00:35:14.685 NVM Subsystem Reset: Not Supported 00:35:14.685 Command Sets Supported 00:35:14.685 NVM Command Set: Supported 00:35:14.685 Boot Partition: Not Supported 00:35:14.685 Memory Page Size Minimum: 4096 bytes 00:35:14.685 Memory Page Size Maximum: 4096 bytes 00:35:14.685 Persistent Memory Region: Not Supported 00:35:14.685 Optional Asynchronous Events Supported 00:35:14.685 Namespace Attribute Notices: Not Supported 00:35:14.685 Firmware Activation Notices: Not Supported 00:35:14.685 ANA Change Notices: Not Supported 00:35:14.685 PLE Aggregate Log Change Notices: Not Supported 00:35:14.685 LBA Status Info Alert Notices: Not Supported 00:35:14.685 EGE Aggregate Log Change Notices: Not Supported 00:35:14.685 Normal NVM Subsystem Shutdown event: Not Supported 00:35:14.685 Zone Descriptor Change Notices: Not Supported 00:35:14.685 Discovery Log Change Notices: Supported 00:35:14.685 Controller Attributes 00:35:14.685 128-bit Host Identifier: Not Supported 00:35:14.685 Non-Operational Permissive Mode: Not Supported 00:35:14.685 NVM Sets: Not Supported 00:35:14.686 Read Recovery Levels: Not Supported 00:35:14.686 Endurance Groups: Not Supported 00:35:14.686 Predictable Latency Mode: Not Supported 00:35:14.686 Traffic Based Keep ALive: Not Supported 00:35:14.686 Namespace Granularity: Not Supported 00:35:14.686 SQ Associations: Not Supported 00:35:14.686 UUID List: Not Supported 00:35:14.686 Multi-Domain Subsystem: Not Supported 00:35:14.686 Fixed Capacity Management: Not Supported 00:35:14.686 Variable 
Capacity Management: Not Supported 00:35:14.686 Delete Endurance Group: Not Supported 00:35:14.686 Delete NVM Set: Not Supported 00:35:14.686 Extended LBA Formats Supported: Not Supported 00:35:14.686 Flexible Data Placement Supported: Not Supported 00:35:14.686 00:35:14.686 Controller Memory Buffer Support 00:35:14.686 ================================ 00:35:14.686 Supported: No 00:35:14.686 00:35:14.686 Persistent Memory Region Support 00:35:14.686 ================================ 00:35:14.686 Supported: No 00:35:14.686 00:35:14.686 Admin Command Set Attributes 00:35:14.686 ============================ 00:35:14.686 Security Send/Receive: Not Supported 00:35:14.686 Format NVM: Not Supported 00:35:14.686 Firmware Activate/Download: Not Supported 00:35:14.686 Namespace Management: Not Supported 00:35:14.686 Device Self-Test: Not Supported 00:35:14.686 Directives: Not Supported 00:35:14.686 NVMe-MI: Not Supported 00:35:14.686 Virtualization Management: Not Supported 00:35:14.686 Doorbell Buffer Config: Not Supported 00:35:14.686 Get LBA Status Capability: Not Supported 00:35:14.686 Command & Feature Lockdown Capability: Not Supported 00:35:14.686 Abort Command Limit: 1 00:35:14.686 Async Event Request Limit: 1 00:35:14.686 Number of Firmware Slots: N/A 00:35:14.686 Firmware Slot 1 Read-Only: N/A 00:35:14.686 Firmware Activation Without Reset: N/A 00:35:14.686 Multiple Update Detection Support: N/A 00:35:14.686 Firmware Update Granularity: No Information Provided 00:35:14.686 Per-Namespace SMART Log: No 00:35:14.686 Asymmetric Namespace Access Log Page: Not Supported 00:35:14.686 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:14.686 Command Effects Log Page: Not Supported 00:35:14.686 Get Log Page Extended Data: Supported 00:35:14.686 Telemetry Log Pages: Not Supported 00:35:14.686 Persistent Event Log Pages: Not Supported 00:35:14.686 Supported Log Pages Log Page: May Support 00:35:14.686 Commands Supported & Effects Log Page: Not Supported 00:35:14.686 Feature Identifiers & Effects Log Page:May Support 00:35:14.686 NVMe-MI Commands & Effects Log Page: May Support 00:35:14.686 Data Area 4 for Telemetry Log: Not Supported 00:35:14.686 Error Log Page Entries Supported: 1 00:35:14.686 Keep Alive: Not Supported 00:35:14.686 00:35:14.686 NVM Command Set Attributes 00:35:14.686 ========================== 00:35:14.686 Submission Queue Entry Size 00:35:14.686 Max: 1 00:35:14.686 Min: 1 00:35:14.686 Completion Queue Entry Size 00:35:14.686 Max: 1 00:35:14.686 Min: 1 00:35:14.686 Number of Namespaces: 0 00:35:14.686 Compare Command: Not Supported 00:35:14.686 Write Uncorrectable Command: Not Supported 00:35:14.686 Dataset Management Command: Not Supported 00:35:14.686 Write Zeroes Command: Not Supported 00:35:14.686 Set Features Save Field: Not Supported 00:35:14.686 Reservations: Not Supported 00:35:14.686 Timestamp: Not Supported 00:35:14.686 Copy: Not Supported 00:35:14.686 Volatile Write Cache: Not Present 00:35:14.686 Atomic Write Unit (Normal): 1 00:35:14.686 Atomic Write Unit (PFail): 1 00:35:14.686 Atomic Compare & Write Unit: 1 00:35:14.686 Fused Compare & Write: Not Supported 00:35:14.686 Scatter-Gather List 00:35:14.686 SGL Command Set: Supported 00:35:14.686 SGL Keyed: Supported 00:35:14.686 SGL Bit Bucket Descriptor: Not Supported 00:35:14.686 SGL Metadata Pointer: Not Supported 00:35:14.686 Oversized SGL: Not Supported 00:35:14.686 SGL Metadata Address: Not Supported 00:35:14.686 SGL Offset: Supported 00:35:14.686 Transport SGL Data Block: Not Supported 00:35:14.686 Replay 
Protected Memory Block: Not Supported 00:35:14.686 00:35:14.686 Firmware Slot Information 00:35:14.686 ========================= 00:35:14.686 Active slot: 0 00:35:14.686 00:35:14.686 00:35:14.686 Error Log 00:35:14.686 ========= 00:35:14.686 00:35:14.686 Active Namespaces 00:35:14.686 ================= 00:35:14.686 Discovery Log Page 00:35:14.686 ================== 00:35:14.686 Generation Counter: 2 00:35:14.686 Number of Records: 2 00:35:14.686 Record Format: 0 00:35:14.686 00:35:14.686 Discovery Log Entry 0 00:35:14.686 ---------------------- 00:35:14.686 Transport Type: 1 (RDMA) 00:35:14.686 Address Family: 1 (IPv4) 00:35:14.686 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:14.686 Entry Flags: 00:35:14.686 Duplicate Returned Information: 0 00:35:14.686 Explicit Persistent Connection Support for Discovery: 0 00:35:14.686 Transport Requirements: 00:35:14.686 Secure Channel: Not Specified 00:35:14.686 Port ID: 1 (0x0001) 00:35:14.686 Controller ID: 65535 (0xffff) 00:35:14.686 Admin Max SQ Size: 32 00:35:14.686 Transport Service Identifier: 4420 00:35:14.686 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:14.686 Transport Address: 192.168.100.8 00:35:14.686 Transport Specific Address Subtype - RDMA 00:35:14.686 RDMA QP Service Type: 1 (Reliable Connected) 00:35:14.686 RDMA Provider Type: 1 (No provider specified) 00:35:14.686 RDMA CM Service: 1 (RDMA_CM) 00:35:14.686 Discovery Log Entry 1 00:35:14.686 ---------------------- 00:35:14.686 Transport Type: 1 (RDMA) 00:35:14.686 Address Family: 1 (IPv4) 00:35:14.686 Subsystem Type: 2 (NVM Subsystem) 00:35:14.686 Entry Flags: 00:35:14.686 Duplicate Returned Information: 0 00:35:14.686 Explicit Persistent Connection Support for Discovery: 0 00:35:14.686 Transport Requirements: 00:35:14.686 Secure Channel: Not Specified 00:35:14.686 Port ID: 1 (0x0001) 00:35:14.686 Controller ID: 65535 (0xffff) 00:35:14.686 Admin Max SQ Size: 32 00:35:14.686 Transport Service Identifier: 4420 00:35:14.686 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:14.686 Transport Address: 192.168.100.8 00:35:14.686 Transport Specific Address Subtype - RDMA 00:35:14.686 RDMA QP Service Type: 1 (Reliable Connected) 00:35:14.946 RDMA Provider Type: 1 (No provider specified) 00:35:14.946 RDMA CM Service: 1 (RDMA_CM) 00:35:14.946 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:14.946 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.946 get_feature(0x01) failed 00:35:14.946 get_feature(0x02) failed 00:35:14.946 get_feature(0x04) failed 00:35:14.946 ===================================================== 00:35:14.946 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:35:14.946 ===================================================== 00:35:14.946 Controller Capabilities/Features 00:35:14.946 ================================ 00:35:14.946 Vendor ID: 0000 00:35:14.946 Subsystem Vendor ID: 0000 00:35:14.946 Serial Number: 341607da55d6ba39b3ca 00:35:14.946 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:14.946 Firmware Version: 6.7.0-68 00:35:14.946 Recommended Arb Burst: 6 00:35:14.946 IEEE OUI Identifier: 00 00 00 00:35:14.946 Multi-path I/O 00:35:14.946 May have multiple subsystem ports: Yes 00:35:14.946 May have multiple controllers: Yes 00:35:14.946 Associated with 
SR-IOV VF: No 00:35:14.946 Max Data Transfer Size: 1048576 00:35:14.946 Max Number of Namespaces: 1024 00:35:14.946 Max Number of I/O Queues: 128 00:35:14.946 NVMe Specification Version (VS): 1.3 00:35:14.946 NVMe Specification Version (Identify): 1.3 00:35:14.946 Maximum Queue Entries: 128 00:35:14.946 Contiguous Queues Required: No 00:35:14.946 Arbitration Mechanisms Supported 00:35:14.946 Weighted Round Robin: Not Supported 00:35:14.946 Vendor Specific: Not Supported 00:35:14.946 Reset Timeout: 7500 ms 00:35:14.946 Doorbell Stride: 4 bytes 00:35:14.946 NVM Subsystem Reset: Not Supported 00:35:14.946 Command Sets Supported 00:35:14.946 NVM Command Set: Supported 00:35:14.946 Boot Partition: Not Supported 00:35:14.946 Memory Page Size Minimum: 4096 bytes 00:35:14.946 Memory Page Size Maximum: 4096 bytes 00:35:14.946 Persistent Memory Region: Not Supported 00:35:14.946 Optional Asynchronous Events Supported 00:35:14.946 Namespace Attribute Notices: Supported 00:35:14.946 Firmware Activation Notices: Not Supported 00:35:14.946 ANA Change Notices: Supported 00:35:14.946 PLE Aggregate Log Change Notices: Not Supported 00:35:14.946 LBA Status Info Alert Notices: Not Supported 00:35:14.946 EGE Aggregate Log Change Notices: Not Supported 00:35:14.946 Normal NVM Subsystem Shutdown event: Not Supported 00:35:14.946 Zone Descriptor Change Notices: Not Supported 00:35:14.946 Discovery Log Change Notices: Not Supported 00:35:14.946 Controller Attributes 00:35:14.946 128-bit Host Identifier: Supported 00:35:14.946 Non-Operational Permissive Mode: Not Supported 00:35:14.946 NVM Sets: Not Supported 00:35:14.946 Read Recovery Levels: Not Supported 00:35:14.946 Endurance Groups: Not Supported 00:35:14.946 Predictable Latency Mode: Not Supported 00:35:14.946 Traffic Based Keep ALive: Supported 00:35:14.946 Namespace Granularity: Not Supported 00:35:14.946 SQ Associations: Not Supported 00:35:14.946 UUID List: Not Supported 00:35:14.946 Multi-Domain Subsystem: Not Supported 00:35:14.946 Fixed Capacity Management: Not Supported 00:35:14.946 Variable Capacity Management: Not Supported 00:35:14.946 Delete Endurance Group: Not Supported 00:35:14.946 Delete NVM Set: Not Supported 00:35:14.946 Extended LBA Formats Supported: Not Supported 00:35:14.946 Flexible Data Placement Supported: Not Supported 00:35:14.946 00:35:14.946 Controller Memory Buffer Support 00:35:14.946 ================================ 00:35:14.946 Supported: No 00:35:14.946 00:35:14.946 Persistent Memory Region Support 00:35:14.946 ================================ 00:35:14.946 Supported: No 00:35:14.946 00:35:14.946 Admin Command Set Attributes 00:35:14.946 ============================ 00:35:14.946 Security Send/Receive: Not Supported 00:35:14.946 Format NVM: Not Supported 00:35:14.946 Firmware Activate/Download: Not Supported 00:35:14.946 Namespace Management: Not Supported 00:35:14.946 Device Self-Test: Not Supported 00:35:14.946 Directives: Not Supported 00:35:14.946 NVMe-MI: Not Supported 00:35:14.946 Virtualization Management: Not Supported 00:35:14.946 Doorbell Buffer Config: Not Supported 00:35:14.946 Get LBA Status Capability: Not Supported 00:35:14.946 Command & Feature Lockdown Capability: Not Supported 00:35:14.946 Abort Command Limit: 4 00:35:14.946 Async Event Request Limit: 4 00:35:14.946 Number of Firmware Slots: N/A 00:35:14.946 Firmware Slot 1 Read-Only: N/A 00:35:14.946 Firmware Activation Without Reset: N/A 00:35:14.946 Multiple Update Detection Support: N/A 00:35:14.946 Firmware Update Granularity: No Information Provided 
00:35:14.946 Per-Namespace SMART Log: Yes 00:35:14.946 Asymmetric Namespace Access Log Page: Supported 00:35:14.947 ANA Transition Time : 10 sec 00:35:14.947 00:35:14.947 Asymmetric Namespace Access Capabilities 00:35:14.947 ANA Optimized State : Supported 00:35:14.947 ANA Non-Optimized State : Supported 00:35:14.947 ANA Inaccessible State : Supported 00:35:14.947 ANA Persistent Loss State : Supported 00:35:14.947 ANA Change State : Supported 00:35:14.947 ANAGRPID is not changed : No 00:35:14.947 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:14.947 00:35:14.947 ANA Group Identifier Maximum : 128 00:35:14.947 Number of ANA Group Identifiers : 128 00:35:14.947 Max Number of Allowed Namespaces : 1024 00:35:14.947 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:14.947 Command Effects Log Page: Supported 00:35:14.947 Get Log Page Extended Data: Supported 00:35:14.947 Telemetry Log Pages: Not Supported 00:35:14.947 Persistent Event Log Pages: Not Supported 00:35:14.947 Supported Log Pages Log Page: May Support 00:35:14.947 Commands Supported & Effects Log Page: Not Supported 00:35:14.947 Feature Identifiers & Effects Log Page:May Support 00:35:14.947 NVMe-MI Commands & Effects Log Page: May Support 00:35:14.947 Data Area 4 for Telemetry Log: Not Supported 00:35:14.947 Error Log Page Entries Supported: 128 00:35:14.947 Keep Alive: Supported 00:35:14.947 Keep Alive Granularity: 1000 ms 00:35:14.947 00:35:14.947 NVM Command Set Attributes 00:35:14.947 ========================== 00:35:14.947 Submission Queue Entry Size 00:35:14.947 Max: 64 00:35:14.947 Min: 64 00:35:14.947 Completion Queue Entry Size 00:35:14.947 Max: 16 00:35:14.947 Min: 16 00:35:14.947 Number of Namespaces: 1024 00:35:14.947 Compare Command: Not Supported 00:35:14.947 Write Uncorrectable Command: Not Supported 00:35:14.947 Dataset Management Command: Supported 00:35:14.947 Write Zeroes Command: Supported 00:35:14.947 Set Features Save Field: Not Supported 00:35:14.947 Reservations: Not Supported 00:35:14.947 Timestamp: Not Supported 00:35:14.947 Copy: Not Supported 00:35:14.947 Volatile Write Cache: Present 00:35:14.947 Atomic Write Unit (Normal): 1 00:35:14.947 Atomic Write Unit (PFail): 1 00:35:14.947 Atomic Compare & Write Unit: 1 00:35:14.947 Fused Compare & Write: Not Supported 00:35:14.947 Scatter-Gather List 00:35:14.947 SGL Command Set: Supported 00:35:14.947 SGL Keyed: Supported 00:35:14.947 SGL Bit Bucket Descriptor: Not Supported 00:35:14.947 SGL Metadata Pointer: Not Supported 00:35:14.947 Oversized SGL: Not Supported 00:35:14.947 SGL Metadata Address: Not Supported 00:35:14.947 SGL Offset: Supported 00:35:14.947 Transport SGL Data Block: Not Supported 00:35:14.947 Replay Protected Memory Block: Not Supported 00:35:14.947 00:35:14.947 Firmware Slot Information 00:35:14.947 ========================= 00:35:14.947 Active slot: 0 00:35:14.947 00:35:14.947 Asymmetric Namespace Access 00:35:14.947 =========================== 00:35:14.947 Change Count : 0 00:35:14.947 Number of ANA Group Descriptors : 1 00:35:14.947 ANA Group Descriptor : 0 00:35:14.947 ANA Group ID : 1 00:35:14.947 Number of NSID Values : 1 00:35:14.947 Change Count : 0 00:35:14.947 ANA State : 1 00:35:14.947 Namespace Identifier : 1 00:35:14.947 00:35:14.947 Commands Supported and Effects 00:35:14.947 ============================== 00:35:14.947 Admin Commands 00:35:14.947 -------------- 00:35:14.947 Get Log Page (02h): Supported 00:35:14.947 Identify (06h): Supported 00:35:14.947 Abort (08h): Supported 00:35:14.947 Set Features (09h): Supported 
00:35:14.947 Get Features (0Ah): Supported 00:35:14.947 Asynchronous Event Request (0Ch): Supported 00:35:14.947 Keep Alive (18h): Supported 00:35:14.947 I/O Commands 00:35:14.947 ------------ 00:35:14.947 Flush (00h): Supported 00:35:14.947 Write (01h): Supported LBA-Change 00:35:14.947 Read (02h): Supported 00:35:14.947 Write Zeroes (08h): Supported LBA-Change 00:35:14.947 Dataset Management (09h): Supported 00:35:14.947 00:35:14.947 Error Log 00:35:14.947 ========= 00:35:14.947 Entry: 0 00:35:14.947 Error Count: 0x3 00:35:14.947 Submission Queue Id: 0x0 00:35:14.947 Command Id: 0x5 00:35:14.947 Phase Bit: 0 00:35:14.947 Status Code: 0x2 00:35:14.947 Status Code Type: 0x0 00:35:14.947 Do Not Retry: 1 00:35:14.947 Error Location: 0x28 00:35:14.947 LBA: 0x0 00:35:14.947 Namespace: 0x0 00:35:14.947 Vendor Log Page: 0x0 00:35:14.947 ----------- 00:35:14.947 Entry: 1 00:35:14.947 Error Count: 0x2 00:35:14.947 Submission Queue Id: 0x0 00:35:14.947 Command Id: 0x5 00:35:14.947 Phase Bit: 0 00:35:14.947 Status Code: 0x2 00:35:14.947 Status Code Type: 0x0 00:35:14.947 Do Not Retry: 1 00:35:14.947 Error Location: 0x28 00:35:14.947 LBA: 0x0 00:35:14.947 Namespace: 0x0 00:35:14.947 Vendor Log Page: 0x0 00:35:14.947 ----------- 00:35:14.947 Entry: 2 00:35:14.947 Error Count: 0x1 00:35:14.947 Submission Queue Id: 0x0 00:35:14.947 Command Id: 0x0 00:35:14.947 Phase Bit: 0 00:35:14.947 Status Code: 0x2 00:35:14.947 Status Code Type: 0x0 00:35:14.947 Do Not Retry: 1 00:35:14.947 Error Location: 0x28 00:35:14.947 LBA: 0x0 00:35:14.947 Namespace: 0x0 00:35:14.947 Vendor Log Page: 0x0 00:35:14.947 00:35:14.947 Number of Queues 00:35:14.947 ================ 00:35:14.947 Number of I/O Submission Queues: 128 00:35:14.947 Number of I/O Completion Queues: 128 00:35:14.947 00:35:14.947 ZNS Specific Controller Data 00:35:14.947 ============================ 00:35:14.947 Zone Append Size Limit: 0 00:35:14.947 00:35:14.947 00:35:14.947 Active Namespaces 00:35:14.947 ================= 00:35:14.947 get_feature(0x05) failed 00:35:14.947 Namespace ID:1 00:35:14.947 Command Set Identifier: NVM (00h) 00:35:14.947 Deallocate: Supported 00:35:14.947 Deallocated/Unwritten Error: Not Supported 00:35:14.947 Deallocated Read Value: Unknown 00:35:14.947 Deallocate in Write Zeroes: Not Supported 00:35:14.947 Deallocated Guard Field: 0xFFFF 00:35:14.947 Flush: Supported 00:35:14.947 Reservation: Not Supported 00:35:14.947 Namespace Sharing Capabilities: Multiple Controllers 00:35:14.947 Size (in LBAs): 3907029168 (1863GiB) 00:35:14.947 Capacity (in LBAs): 3907029168 (1863GiB) 00:35:14.947 Utilization (in LBAs): 3907029168 (1863GiB) 00:35:14.947 UUID: 3d36b669-36ae-44d7-bb5b-f4913113178b 00:35:14.947 Thin Provisioning: Not Supported 00:35:14.947 Per-NS Atomic Units: Yes 00:35:14.947 Atomic Boundary Size (Normal): 0 00:35:14.947 Atomic Boundary Size (PFail): 0 00:35:14.947 Atomic Boundary Offset: 0 00:35:14.947 NGUID/EUI64 Never Reused: No 00:35:14.947 ANA group ID: 1 00:35:14.947 Namespace Write Protected: No 00:35:14.947 Number of LBA Formats: 1 00:35:14.947 Current LBA Format: LBA Format #00 00:35:14.947 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:14.947 00:35:14.947 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:14.947 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:14.947 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:35:14.948 20:56:03 
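# The identify output above was served by a kernel nvmet target that
# configure_kernel_target assembled through configfs (the mkdir/echo/ln -s
# trace at 20:56:02) and that clean_kernel_target removes below. The xtrace
# hides the echo redirection targets; the attribute paths here follow the
# standard nvmet configfs layout, so read this as a sketch, not a transcript.
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet nvmet-rdma
mkdir -p "$sub/namespaces/1" "$port"
echo "SPDK-$nqn"   > "$sub/attr_model"               # the 'echo SPDK-nqn...' record
echo 1             > "$sub/attr_allow_any_host"
echo /dev/nvme0n1  > "$sub/namespaces/1/device_path"
echo 1             > "$sub/namespaces/1/enable"
echo 192.168.100.8 > "$port/addr_traddr"
echo rdma          > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                     # publish the subsystem on the port
# Teardown, mirroring the rm/rmdir/modprobe -r sequence that follows:
echo 0 > "$sub/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$sub/namespaces/1" "$port" "$sub"
modprobe -r nvmet_rdma nvmet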
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:35:14.948 rmmod nvme_rdma 00:35:14.948 rmmod nvme_fabrics 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:35:14.948 20:56:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:35:19.141 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 
0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:19.141 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:21.048 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:35:21.048 00:35:21.048 real 0m19.395s 00:35:21.048 user 0m4.957s 00:35:21.048 sys 0m11.652s 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:21.048 ************************************ 00:35:21.048 END TEST nvmf_identify_kernel_target 00:35:21.048 ************************************ 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.048 ************************************ 00:35:21.048 START TEST nvmf_auth_host 00:35:21.048 ************************************ 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:35:21.048 * Looking for test storage... 00:35:21.048 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.048 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@47 -- # : 0 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:35:21.049 20:56:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- 
# echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:35:29.169 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:35:29.169 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:35:29.169 Found net devices under 0000:d9:00.0: mlx_0_0 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:35:29.169 Found net devices under 
0000:d9:00.1: mlx_0_1 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:35:29.169 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:35:29.170 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:29.170 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:35:29.170 altname enp217s0f0np0 00:35:29.170 altname ens818f0np0 00:35:29.170 inet 192.168.100.8/24 scope global mlx_0_0 00:35:29.170 valid_lft forever preferred_lft forever 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:35:29.170 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:29.170 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:35:29.170 altname enp217s0f1np1 00:35:29.170 altname ens818f1np1 00:35:29.170 inet 192.168.100.9/24 scope global mlx_0_1 00:35:29.170 valid_lft forever preferred_lft forever 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@86 -- # get_rdma_if_list 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:29.170 20:56:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:35:29.170 192.168.100.9' 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:35:29.170 192.168.100.9' 00:35:29.170 20:56:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:35:29.170 192.168.100.9' 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1333891 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1333891 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1333891 ']' 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:29.170 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.427 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:29.427 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:35:29.427 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:29.427 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:29.427 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.685 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:29.685 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:29.685 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:29.685 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.685 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.685 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.685 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:29.685 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:29.685 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:29.685 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d6bea3004b997b15dcad16e3c822caa8 00:35:29.685 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:29.686 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Xj9 00:35:29.686 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d6bea3004b997b15dcad16e3c822caa8 0 00:35:29.686 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d6bea3004b997b15dcad16e3c822caa8 0 00:35:29.686 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.686 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.686 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d6bea3004b997b15dcad16e3c822caa8 00:35:29.686 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:29.686 20:56:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Xj9 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Xj9 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Xj9 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file 
key 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c312cbf1b4f800557829d35d2995db462eb256924abcb8ad8efc9e6bea9b9bd3 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.24g 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c312cbf1b4f800557829d35d2995db462eb256924abcb8ad8efc9e6bea9b9bd3 3 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c312cbf1b4f800557829d35d2995db462eb256924abcb8ad8efc9e6bea9b9bd3 3 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c312cbf1b4f800557829d35d2995db462eb256924abcb8ad8efc9e6bea9b9bd3 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.24g 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.24g 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.24g 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=53587d777630ab1ec5fbe5fcd634f4361d6aeae6979265f8 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jxy 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 53587d777630ab1ec5fbe5fcd634f4361d6aeae6979265f8 0 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key 
DHHC-1 53587d777630ab1ec5fbe5fcd634f4361d6aeae6979265f8 0 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=53587d777630ab1ec5fbe5fcd634f4361d6aeae6979265f8 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jxy 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jxy 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.jxy 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d0b54add8c754a886c3a46bfaf4c03e4a30f9c1072e54439 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.f3C 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d0b54add8c754a886c3a46bfaf4c03e4a30f9c1072e54439 2 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d0b54add8c754a886c3a46bfaf4c03e4a30f9c1072e54439 2 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d0b54add8c754a886c3a46bfaf4c03e4a30f9c1072e54439 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.f3C 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.f3C 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.f3C 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:29.686 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ae38ce5a606307293c211a80b29a3ee7 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.b6K 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ae38ce5a606307293c211a80b29a3ee7 1 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ae38ce5a606307293c211a80b29a3ee7 1 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ae38ce5a606307293c211a80b29a3ee7 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.b6K 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.b6K 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.b6K 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5bed640b437f2e213f84dd1c40bd8a96 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9dv 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5bed640b437f2e213f84dd1c40bd8a96 1 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5bed640b437f2e213f84dd1c40bd8a96 1 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5bed640b437f2e213f84dd1c40bd8a96 00:35:29.949 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9dv 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9dv 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9dv 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5f50044c9559651f790864c816bad17f5fd5d4e58aef2138 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DWX 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5f50044c9559651f790864c816bad17f5fd5d4e58aef2138 2 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5f50044c9559651f790864c816bad17f5fd5d4e58aef2138 2 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5f50044c9559651f790864c816bad17f5fd5d4e58aef2138 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DWX 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DWX 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.DWX 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 
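The gen_dhchap_key calls traced through this stretch all follow one pattern: draw len/2 random bytes as a hex string, wrap that string in the DHHC-1 secret representation, and park the result in a 0600 temp file. A sketch of the flow; the Python body is an assumption reconstructed from the DHHC-1:xx:base64: strings visible later in the log (base64 of the ASCII secret with a little-endian CRC-32 appended), not a copy of the nvmf/common.sh implementation:

gen_dhchap_key() {
    # digest ids as in the traced table: null=0 sha256=1 sha384=2 sha512=3
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the ASCII hex string is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")   # DHHC-1 appends a CRC-32
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(secret + crc).decode()}:")
PY
    chmod 0600 "$file"
    echo "$file"
}

Each call (e.g. gen_dhchap_key null 32) prints the temp-file path, which auth.sh records in the keys[] / ckeys[] arrays used below.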
00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=54868f8a5620e41c2163381914d0ccba 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.iOG 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 54868f8a5620e41c2163381914d0ccba 0 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 54868f8a5620e41c2163381914d0ccba 0 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=54868f8a5620e41c2163381914d0ccba 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.iOG 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.iOG 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.iOG 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=daaa95726713a373c6f38ad10c0abcaca402302bc7e595acc9bcd3ba2afbd7f8 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.AYR 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key daaa95726713a373c6f38ad10c0abcaca402302bc7e595acc9bcd3ba2afbd7f8 3 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 daaa95726713a373c6f38ad10c0abcaca402302bc7e595acc9bcd3ba2afbd7f8 3 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=daaa95726713a373c6f38ad10c0abcaca402302bc7e595acc9bcd3ba2afbd7f8 00:35:29.950 20:56:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:29.950 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.AYR 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.AYR 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.AYR 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1333891 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1333891 ']' 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:30.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:35:30.267 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Xj9 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.24g ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.24g 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.jxy 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- 
# [[ -n /tmp/spdk.key-sha384.f3C ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.f3C 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.b6K 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9dv ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9dv 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.DWX 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.iOG ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.iOG 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.AYR 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:30.268 20:56:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:30.268 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:30.526 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:30.527 20:56:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:35:33.814 Waiting for block devices as requested 00:35:34.072 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:34.072 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:34.072 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:34.331 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:34.331 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:34.331 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:34.331 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:34.589 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:34.589 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:34.589 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:34.847 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:34.847 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:34.847 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:35.105 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:35.105 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:35.105 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:35.363 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:36.298 No valid GPT data, bailing 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:36.298 20:56:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:35:36.298 00:35:36.298 Discovery Log Number of Records 2, Generation counter 2 00:35:36.298 =====Discovery Log Entry 0====== 00:35:36.298 trtype: rdma 00:35:36.298 adrfam: ipv4 00:35:36.298 subtype: current discovery subsystem 00:35:36.298 treq: not specified, sq flow control disable supported 00:35:36.298 portid: 1 00:35:36.298 trsvcid: 4420 00:35:36.298 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:36.298 traddr: 192.168.100.8 00:35:36.298 eflags: none 00:35:36.298 rdma_prtype: not specified 00:35:36.298 rdma_qptype: connected 00:35:36.298 rdma_cms: rdma-cm 00:35:36.298 rdma_pkey: 0x0000 00:35:36.298 =====Discovery Log Entry 1====== 00:35:36.298 trtype: rdma 00:35:36.298 adrfam: ipv4 00:35:36.298 subtype: nvme subsystem 00:35:36.298 treq: not specified, sq flow control disable supported 00:35:36.298 portid: 1 00:35:36.298 trsvcid: 4420 00:35:36.298 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:36.298 traddr: 192.168.100.8 00:35:36.298 eflags: none 00:35:36.298 rdma_prtype: not specified 00:35:36.298 rdma_qptype: connected 00:35:36.298 rdma_cms: rdma-cm 00:35:36.298 rdma_pkey: 0x0000 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:36.298 20:56:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.298 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.557 nvme0n1 00:35:36.557 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.557 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.557 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.557 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.557 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.557 20:56:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.557 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.557 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.557 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.557 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.557 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.557 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:36.557 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:36.557 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.558 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.816 nvme0n1 00:35:36.816 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
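The configure_kernel_target sequence traced a few screens back (the mkdir/echo/ln -s run between the modprobe nvmet and the nvme discover output) is easier to read with its redirections restored, since xtrace shows echo's arguments but not the "> attr" targets. A sketch with the stock nvmet configfs attribute names filled back in; those attribute paths are an assumption based on the standard nvmet interface, as the log itself hides them:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
echo rdma > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# nvmet_auth_init then locks the subsystem down to one explicit host,
# matching the traced mkdir hosts/..., echo 0, ln -s allowed_hosts steps:
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"

With the port linked, the nvme discover output above shows the expected two records: the discovery subsystem and nqn.2024-02.io.spdk:cnode0 on 192.168.100.8:4420/rdma.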
00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.817 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.076 nvme0n1 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
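Each connect_authenticate iteration in this stretch pairs a target-side configfs write with the same host-side RPCs. De-interleaved, one round looks roughly like the sketch below: the per-host dhchap_* attribute names are assumed (xtrace again hides the redirect targets), while the RPC names and flags are verbatim from the trace, with scripts/rpc.py standing in for the rpc_cmd wrapper. The long DHHC-1 secrets are elided here but appear in full above:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"        # digest under test
echo ffdhe2048 > "$host/dhchap_dhgroup"          # DH group under test
echo 'DHHC-1:00:...' > "$host/dhchap_key"        # keys[keyid]  (elided)
echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"   # ckeys[keyid] (elided; only if set)

scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0

The "nvme0n1" lines punctuating the trace are the namespace appearing after each successful authenticated attach; the loop then detaches and repeats with the next keyid.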
00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:37.076 20:56:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.076 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.335 nvme0n1 00:35:37.335 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.335 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.335 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.335 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.335 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.335 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.335 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.335 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.335 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.335 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:37.336 20:56:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.336 20:56:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.595 nvme0n1 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:37.595 20:56:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.595 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.853 nvme0n1 00:35:37.853 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.853 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.853 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.853 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.853 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.853 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.853 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.853 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.853 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.853 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.110 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 
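
[Annotation] Every secret echoed in these entries uses the NVMe-oF DH-HMAC-CHAP container format visible in the key= and ckey= assignments. A short decode of one sample copied verbatim from the trace; the field meanings follow the NVMe specification and are an interpretation added here, not something the trace itself states.

# Sketch: anatomy of a DHHC-1 secret taken from the trace above.
secret='DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk:'
IFS=: read -r fmt xform b64 _ <<<"$secret"
echo "$fmt"    # DHHC-1 -> DH-HMAC-CHAP secret container format
echo "$xform"  # 01 -> secret stored transformed with SHA-256 (00 = untransformed)
echo "$b64"    # base64 payload: the secret bytes followed by a CRC-32 check value
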
00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.111 nvme0n1 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.111 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.369 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.628 nvme0n1 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:38.628 20:56:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.628 20:56:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.628 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.628 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.628 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:38.628 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:38.628 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.628 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.628 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.628 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:38.628 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:38.628 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:38.628 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:38.628 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:38.628 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:38.629 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.629 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.887 nvme0n1 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.887 
20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.887 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.146 nvme0n1 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:39.146 20:56:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.146 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.405 nvme0n1 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.405 
20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.405 20:56:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.663 nvme0n1 00:35:39.663 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.663 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.663 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.663 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.663 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.663 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.663 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.663 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.663 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.663 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
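
[Annotation] The host/auth.sh@101 and @102 markers in the surrounding entries expose the sweep driving this whole section: an outer loop over DH groups and an inner loop over key ids, with nvmet_auth_set_key provisioning the target and connect_authenticate exercising the host. The control-flow sketch below is inferred from those traced loop headers; only sha256 with ffdhe2048/ffdhe3072/ffdhe4096 and key ids 0-4 are visible in this span, so any wider digest or dhgroup list would be an assumption.

# Sketch: nested sweep reconstructed from the host/auth.sh@101-@104 trace lines.
for dhgroup in "${dhgroups[@]}"; do                  # ffdhe2048, ffdhe3072, ffdhe4096, ...
  for keyid in "${!keys[@]}"; do                     # 0 1 2 3 4
    nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # provision target-side key/ckey
    connect_authenticate sha256 "$dhgroup" "$keyid"  # attach, verify controller name, detach
  done
done
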
00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.921 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.922 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.180 nvme0n1 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:40.180 20:56:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.180 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.439 nvme0n1 00:35:40.439 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.439 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.439 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.439 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.439 20:56:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.439 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.439 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.439 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.439 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.439 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.697 20:56:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.697 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.955 nvme0n1 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.955 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.214 nvme0n1 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.214 20:56:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.782 nvme0n1 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.782 20:56:30 
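
A note on the DHHC-1:xx:...: strings that appear throughout this section: this is the NVMe DH-HMAC-CHAP secret representation, where xx is 00 for a secret used as-is, or 01/02/03 for a secret to be transformed with SHA-256/384/512, and the base64 field carries the secret followed by a 4-byte CRC32 (the CRC detail is an assumption based on the kernel's key-extraction behavior, not something this trace shows). A quick sanity check on the keyid-0 secret from above:

    # Decode the base64 payload of a DHHC-1 secret and check its length.
    secret='DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo:'
    payload=${secret#DHHC-1:*:}   # strip the "DHHC-1:00:" prefix
    payload=${payload%:}          # strip the trailing colon
    echo -n "$payload" | base64 -d | wc -c   # 36 bytes = 32-byte secret + 4-byte CRC32
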
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 
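
The local ip / ip_candidates stanza that repeats before every attach is nvmf/common.sh's get_main_ns_ip: it maps the transport to the name of the environment variable that holds the target address, then expands that variable indirectly. A condensed sketch, assuming the transport lives in a variable such as TEST_TRANSPORT (the trace only shows its expanded value, rdma):

    # Condensed form of the get_main_ns_ip logic traced above.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        ip=${ip_candidates[$TEST_TRANSPORT]}        # rdma -> NVMF_FIRST_TARGET_IP
        [[ -n $ip && -n ${!ip} ]] && echo "${!ip}"  # indirect expansion -> 192.168.100.8
    }

The real function fails rather than returning silently when the transport or the variable is unset, which is what the [[ -z rdma ]] and [[ -z NVMF_FIRST_TARGET_IP ]] guards above are checking.
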
00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.782 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.348 nvme0n1 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.348 20:56:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.606 nvme0n1 00:35:42.606 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.606 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.606 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.606 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.606 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.864 20:56:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.864 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.122 nvme0n1 00:35:43.122 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.122 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.122 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.122 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.122 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.122 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.122 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.122 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.122 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.122 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.380 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.380 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.380 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:43.380 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.380 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:43.380 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:43.380 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:43.381 20:56:31 
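
Note the empty ckey= just assigned for keyid 4: unlike keyids 0 through 3, key 4 has no companion controller secret, so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 produces an empty array and the subsequent attach runs without --dhchap-ctrlr-key (unidirectional authentication). The idiom in isolation, with a truncated placeholder secret for illustration:

    # ${var:+words}: expands to the flag pair only when a controller secret exists.
    declare -a ckeys=([0]='DHHC-1:03:YzMx...truncated...' [4]='')
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args"   # 2 for keyid 0, 0 for keyid 4
    done
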
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.381 20:56:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.639 nvme0n1 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:43.639 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:43.640 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:43.640 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:35:43.640 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:43.640 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:43.640 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.640 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:43.640 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:43.640 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:43.640 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.640 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:43.640 20:56:32 
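
The bdev_nvme_set_options call that just ran pins the host to exactly one digest and one DH group before each attach, so the DH-HMAC-CHAP negotiation can only succeed on the combination under test. The per-iteration host-side pair, as it appears in the trace (rpc_cmd wraps scripts/rpc.py):

    # Restrict the host, then attach with the keyid-0 host and controller keys.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

key0 and ckey0 are names of keys presumably registered earlier in the run (not shown in this excerpt), not inline secrets.
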
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.640 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.933 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.530 nvme0n1 00:35:44.530 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.531 20:56:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:44.531 20:56:32 
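
On the target side, the echo 'hmac(sha256)', echo ffdhe8192, and echo DHHC-1:... steps at host/auth.sh@48-50 set the expected digest, DH group, and secrets for the host entry. The trace does not show where those echoes land; a plausible sketch, assuming they feed the kernel nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key):

    # Assumed destination of the nvmet_auth_set_key echoes (paths not in the trace).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'            > "$host/dhchap_hash"
    echo 'ffdhe8192'               > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:...secret...'  > "$host/dhchap_key"       # placeholder secret
    echo 'DHHC-1:02:...secret...'  > "$host/dhchap_ctrl_key"  # only for keyids with a ckey
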
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.531 20:56:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.098 nvme0n1 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.098 20:56:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.034 nvme0n1 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.034 20:56:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.034 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.602 nvme0n1 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.603 20:56:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.171 nvme0n1 00:35:47.171 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.171 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.171 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.171 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.171 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.171 20:56:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.171 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.172 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.431 nvme0n1 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.431 20:56:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.691 nvme0n1 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.691 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.951 nvme0n1 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.951 20:56:36 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.951 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.211 nvme0n1 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
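The for-markers at host/auth.sh@100-104 in the surrounding traces imply a three-level sweep: every digest, every DH group, every key index gets its own configure/connect/detach cycle. A minimal bash sketch of that driver loop, reconstructed from the trace markers — the array contents beyond what this excerpt shows are assumptions (sha256 and sha384 appear above; sha512 and the remaining ffdhe groups are assumed to follow):

    # Assumed array contents; only sha256/sha384 and ffdhe2048-ffdhe8192
    # are visible in this excerpt of the log.
    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    for digest in "${digests[@]}"; do        # host/auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do  # host/auth.sh@101
            for keyid in "${!keys[@]}"; do   # host/auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid" # @104
            done
        done
    done

Each iteration reprograms the kernel target and re-authenticates the SPDK initiator, which is why the same rpc_cmd sequence repeats throughout this section of the log.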
00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.211 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.470 nvme0n1 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.470 
20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.470 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.471 20:56:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.730 nvme0n1 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:48.730 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.730 20:56:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.989 nvme0n1 00:35:48.989 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.989 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.989 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.989 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.989 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.989 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.989 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.989 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.990 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.990 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.249 20:56:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.249 nvme0n1 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.249 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 
3 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.509 20:56:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.769 nvme0n1 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 
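host/auth.sh@42-51 traces the target-side half of each iteration: pick the key pair for this keyid, then echo the HMAC digest, DH group, host key, and (when present) controller key. A sketch of what such a helper plausibly does with those echoed values, assuming they are redirected into the kernel nvmet configfs host entry — the ${nvmet_host} path and attribute names are assumptions, since the log only shows the values being echoed:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
        # Values traced at @48-@51, written to assumed configfs attributes:
        echo "hmac(${digest})" > "${nvmet_host}/dhchap_hash"
        echo "${dhgroup}"      > "${nvmet_host}/dhchap_dhgroup"
        echo "${key}"          > "${nvmet_host}/dhchap_key"
        # The @51 guard: a controller key is set only for bidirectional
        # authentication (keyid 4 has an empty ckey in the traces above).
        [[ -z ${ckey} ]] || echo "${ckey}" > "${nvmet_host}/dhchap_ctrl_key"
    }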
00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.769 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.770 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.029 nvme0n1 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
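The initiator-side half (host/auth.sh@55-65) is fully visible in the traces: restrict the allowed digest/dhgroup, attach with the keyid under test, confirm the controller came up, detach. Condensed into one function from the commands shown verbatim above (rpc_cmd is the test suite's wrapper around scripts/rpc.py; keyN/ckeyN are key names registered with the bdev layer earlier in the run):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # @58: pass a controller key only when one exists for this keyid.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Force negotiation onto exactly this digest/dhgroup pair.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        # Attach with DH-HMAC-CHAP; a failed authentication fails the test.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
        # Verify the controller actually exists, then tear it down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Calling bdev_nvme_set_options before every attach is what pins the negotiation to a single digest/dhgroup combination per iteration instead of letting the host offer its full list.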
00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.029 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.289 nvme0n1 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.289 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.548 20:56:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.807 nvme0n1 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
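[Annotation] The get_main_ns_ip expansion repeated before every attach (nvmf/common.sh@741-755 in the records above) resolves the target address once per connection. A sketch under two stated assumptions: the transport selector variable is named $TEST_TRANSPORT (the trace only shows its expanded value, rdma), and the jump in the trace from the variable name NVMF_FIRST_TARGET_IP at @748 to the literal 192.168.100.8 at @750 is an inferred indirect expansion:

    # Sketch of get_main_ns_ip; guards mirror the -z tests at common.sh@747/@750.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # common.sh@744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # common.sh@745
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}  # still the *name* of an env var here
        ip=${!ip}                             # inferred indirection; yields 192.168.100.8 in this run
        [[ -z $ip ]] && return 1
        echo "$ip"
    }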
00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.807 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.808 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.067 nvme0n1 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:51.067 20:56:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.067 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.327 nvme0n1 00:35:51.327 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.327 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.327 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.327 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.327 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.327 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.327 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.327 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.327 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.327 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.586 20:56:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.846 nvme0n1 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.846 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.414 nvme0n1 00:35:52.414 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.414 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.414 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.414 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.414 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.414 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.414 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.414 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.414 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.415 20:56:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.674 nvme0n1 00:35:52.674 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.674 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.674 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.674 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.674 20:56:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.674 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.933 20:56:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.933 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.192 nvme0n1 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 
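[Annotation] The secrets echoed throughout this sweep use the NVMe DH-HMAC-CHAP representation "DHHC-1:<t>:<base64>:", where <t> indicates the transformation hash applied to the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). To my reading, the decoded payload is the secret followed by a 4-byte checksum, which a quick inspection of the keyid-0 secret from this log is consistent with:

    # Illustrative only; $key is copied verbatim from the auth.sh@45 record above.
    key='DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo:'
    printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c
    # -> 36 bytes, i.e. a 32-byte secret plus the trailing 4-byte checksum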
00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.192 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.451 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:53.451 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:53.451 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:53.451 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:53.451 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:53.451 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:53.451 20:56:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.451 20:56:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.711 nvme0n1 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:53.711 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.712 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.279 nvme0n1 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.279 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:54.280 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:54.280 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:54.280 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
192.168.100.8 ]] 00:35:54.280 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:54.280 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:54.280 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.280 20:56:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.855 nvme0n1 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.855 20:56:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.790 nvme0n1 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.790 20:56:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:55.790 20:56:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.790 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.357 nvme0n1 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe8192 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:56.357 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.358 20:56:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.925 nvme0n1 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.925 
20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.925 20:56:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.565 nvme0n1 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:57.565 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:57.566 20:56:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.566 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.824 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.824 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:57.824 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:57.824 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:57.824 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.825 nvme0n1 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.825 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.083 20:56:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.083 nvme0n1 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.083 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.083 
20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.343 
20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.343 nvme0n1 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.343 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:58.602 20:56:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:58.602 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:58.603 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:58.603 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:58.603 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.603 20:56:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.603 nvme0n1 00:35:58.603 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.603 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.603 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.603 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.603 20:56:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.603 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 
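Every pass in the trace above follows the same shape: program a key pair into the target, constrain the host to a single digest/dhgroup combination, attach over RDMA with the matching --dhchap-key, confirm the controller came up, and detach. The following is a condensed sketch reconstructed from the host/auth.sh xtrace fragments in this log — it is not the verbatim test script, and the array contents reflect only the combinations this run shows (sha384 and sha512 digests; the ffdhe8192, ffdhe2048, and ffdhe3072 groups):

# condensed per-key authentication pass, as exercised by host/auth.sh@100-104
digests=(sha384 sha512)                      # passes observed in this run
dhgroups=(ffdhe8192 ffdhe2048 ffdhe3072)     # order as it appears above
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do       # keys[0..4] per the trace
            # target side: install the key (and ctrlr key, when one is defined)
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # host side: restrict negotiation to exactly this digest/dhgroup
            rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"
            # connect; the ctrlr key is passed only for keyids that define one
            # (the ${ckeys[keyid]:+...} expansion visible at host/auth.sh@58)
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
                -a 192.168.100.8 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" \
                ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
            # authentication succeeded iff the controller shows up; then tear down
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done
done

The "nvme0n1" markers interleaved in the log are the namespace of each freshly attached controller appearing between the attach and the get_controllers/detach steps of this loop.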
00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.862 nvme0n1 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.862 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:59.121 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.121 20:56:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.380 nvme0n1 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:59.380 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.381 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.640 nvme0n1 00:35:59.640 20:56:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.640 
20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 
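
Editor's note, for readers following the trace: get_main_ns_ip picks the connect address out of the harness environment by transport type. A minimal sketch of that selection logic is below, assuming the NVMF_FIRST_TARGET_IP/NVMF_INITIATOR_IP and TEST_TRANSPORT variables exported by the test setup; the function name matches the trace, everything else is illustrative, not the authoritative implementation:

  get_main_ns_ip() {
      local ip
      declare -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs connect to the first target port
          ["tcp"]=NVMF_INITIATOR_IP
      )
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      # indirect expansion: resolve the variable whose name the map stored
      ip=${!ip_candidates[$TEST_TRANSPORT]}
      [[ -z $ip ]] && return 1
      echo "$ip"                          # 192.168.100.8 on this test bed
  }
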
00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.640 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.900 nvme0n1 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:35:59.900 20:56:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.900 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.160 nvme0n1 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.160 
20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.160 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.420 nvme0n1 00:36:00.420 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.420 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.420 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.420 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.420 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.420 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.420 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.420 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.420 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.420 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:00.680 20:56:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:00.680 20:56:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:00.680 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:00.680 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.680 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.939 nvme0n1 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
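
Editor's note: each connect attempt above counts as a pass only if the controller actually appears, after which the trace detaches it before moving to the next digest/dhgroup/key combination. A hedged sketch of that check, assuming SPDK's rpc.py client is on PATH and pointed at the target under test:

  # verify the authenticated attach produced a controller named nvme0
  name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || { echo "DH-HMAC-CHAP connect failed" >&2; exit 1; }
  rpc.py bdev_nvme_detach_controller nvme0   # clean slate for the next iteration
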
00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.939 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.198 nvme0n1 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.198 
20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.198 20:56:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.457 nvme0n1 00:36:01.457 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.716 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.975 nvme0n1 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.975 20:56:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.975 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:01.976 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:01.976 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:36:01.976 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:01.976 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:01.976 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:01.976 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.976 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.235 nvme0n1 00:36:02.235 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.235 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.235 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.235 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.235 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.235 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:36:02.495 20:56:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.495 20:56:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.754 nvme0n1 00:36:02.754 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.754 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.754 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.754 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.754 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.754 20:56:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.754 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.754 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.754 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.754 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
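
Editor's note: the host side of every iteration is the same two RPCs, reassembled here from the ffdhe6144/key1 fragments above with values verbatim from this run. rpc.py stands in for the script's rpc_cmd wrapper, and key1/ckey1 are key names the test registered earlier in the run:

  # pin the initiator to a single digest/dhgroup pair, then attach with the key pair under test
  rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
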
00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.014 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.273 nvme0n1 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 
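
Editor's note: on the target side, nvmet_auth_set_key installs the matching secrets. The DHHC-1:<t>:<base64>: strings echoed in this trace are standard NVMe DH-HMAC-CHAP secret representations, where <t> is 00/01/02/03 for no transform/SHA-256/SHA-384/SHA-512. A sketch of the likely configfs writes follows; the kernel-nvmet paths and placeholder secrets are assumptions, not values taken from this trace:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed layout
  echo "hmac(sha512)"        > "$host/dhchap_hash"      # digest under test
  echo "ffdhe6144"           > "$host/dhchap_dhgroup"   # DH group under test
  echo "DHHC-1:01:<base64>:" > "$host/dhchap_key"       # host secret (this iteration's key)
  echo "DHHC-1:01:<base64>:" > "$host/dhchap_ctrl_key"  # controller secret, for bidirectional auth
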
00:36:03.273 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.532 20:56:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:03.791 nvme0n1 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.791 20:56:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.791 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.358 nvme0n1 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 
4 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.358 20:56:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.926 nvme0n1 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZiZWEzMDA0Yjk5N2IxNWRjYWQxNmUzYzgyMmNhYTgomCzo: 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: ]] 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzMxMmNiZjFiNGY4MDA1NTc4MjlkMzVkMjk5NWRiNDYyZWIyNTY5MjRhYmNiOGFkOGVmYzllNmJlYTliOWJkM0cgiJ0=: 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:04.926 20:56:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.926 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.491 nvme0n1 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
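Each ffdhe6144/ffdhe8192 iteration above repeats one pattern: nvmet_auth_set_key provisions the kernel target with the key material for the current keyid, connect_authenticate points the SPDK initiator at the same digest/dhgroup and attaches nvme0 with the matching --dhchap-key/--dhchap-ctrlr-key pair, and the controller is verified via bdev_nvme_get_controllers and then detached. set -x does not print redirection targets, so where the echo calls at auth.sh@48-51 write is not visible in the trace; the sketch below assumes they land in the standard Linux nvmet configfs host attributes, whose accepted values match the strings being echoed.

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Host entry created earlier in the test; NQN taken from the attach calls.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"    # e.g. hmac(sha512)
    echo "$dhgroup" > "$host/dhchap_dhgroup"      # e.g. ffdhe6144
    echo "${keys[keyid]}" > "$host/dhchap_key"    # host secret (DHHC-1:xx:...:)
    # ckeys[] entries may be empty; set the controller key only when present,
    # which makes the CHAP handshake bidirectional.
    [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
}

The same ${ckeys[keyid]:+...} idiom appears in connect_authenticate at auth.sh@58: the ckey array expands to --dhchap-ctrlr-key "ckeyN" only when a controller key exists, which is why the keyid=4 attach above carries --dhchap-key key4 alone. On the initiator side, rpc_cmd is the harness wrapper around scripts/rpc.py, so each positive step is roughly equivalent to:

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1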
00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:05.491 
20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.491 20:56:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.056 nvme0n1 00:36:06.056 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.056 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.056 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.056 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.056 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.056 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 
-- # echo ffdhe8192 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWUzOGNlNWE2MDYzMDcyOTNjMjExYTgwYjI5YTNlZTcHSzmk: 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: ]] 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJlZDY0MGI0MzdmMmUyMTNmODRkZDFjNDBiZDhhOTZ/uLVv: 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.314 20:56:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.881 nvme0n1 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.881 20:56:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY1MDA0NGM5NTU5NjUxZjc5MDg2NGM4MTZiYWQxN2Y1ZmQ1ZDRlNThhZWYyMTM4TWu58w==: 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: ]] 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTQ4NjhmOGE1NjIwZTQxYzIxNjMzODE5MTRkMGNjYmE45B9f: 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.881 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.447 nvme0n1 00:36:07.447 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.447 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.447 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.448 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.448 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.448 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.448 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.448 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.448 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.448 20:56:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe8192 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGFhYTk1NzI2NzEzYTM3M2M2ZjM4YWQxMGMwYWJjYWNhNDAyMzAyYmM3ZTU5NWFjYzliY2QzYmEyYWZiZDdmOAqxsXo=: 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:07.706 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.706 20:56:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.273 nvme0n1 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTM1ODdkNzc3NjMwYWIxZWM1ZmJlNWZjZDYzNGY0MzYxZDZhZWFlNjk3OTI2NWY4sEMSiA==: 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: ]] 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDBiNTRhZGQ4Yzc1NGE4ODZjM2E0NmJmYWY0YzAzZTRhMzBmOWMxMDcyZTU0NDM5zf3R5Q==: 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.273 20:56:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.273 request: 00:36:08.273 { 00:36:08.273 "name": "nvme0", 00:36:08.273 "trtype": "rdma", 00:36:08.273 "traddr": "192.168.100.8", 00:36:08.273 "adrfam": "ipv4", 00:36:08.273 "trsvcid": "4420", 00:36:08.273 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:08.273 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:08.273 "prchk_reftag": false, 00:36:08.273 "prchk_guard": false, 00:36:08.273 "hdgst": false, 00:36:08.273 "ddgst": false, 00:36:08.273 "method": "bdev_nvme_attach_controller", 00:36:08.273 "req_id": 1 00:36:08.273 } 00:36:08.273 Got JSON-RPC error response 00:36:08.273 response: 00:36:08.273 { 00:36:08.273 "code": -5, 00:36:08.273 "message": "Input/output error" 00:36:08.273 } 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.273 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.532 request: 
00:36:08.532 { 00:36:08.532 "name": "nvme0", 00:36:08.532 "trtype": "rdma", 00:36:08.532 "traddr": "192.168.100.8", 00:36:08.532 "adrfam": "ipv4", 00:36:08.532 "trsvcid": "4420", 00:36:08.532 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:08.532 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:08.532 "prchk_reftag": false, 00:36:08.532 "prchk_guard": false, 00:36:08.532 "hdgst": false, 00:36:08.532 "ddgst": false, 00:36:08.532 "dhchap_key": "key2", 00:36:08.532 "method": "bdev_nvme_attach_controller", 00:36:08.532 "req_id": 1 00:36:08.532 } 00:36:08.532 Got JSON-RPC error response 00:36:08.532 response: 00:36:08.532 { 00:36:08.532 "code": -5, 00:36:08.532 "message": "Input/output error" 00:36:08.532 } 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.532 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.533 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:08.533 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:08.533 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:08.533 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:08.533 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:08.533 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:08.533 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:08.533 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:08.533 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:08.533 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.533 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:08.533 20:56:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.533 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:08.533 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.533 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.792 request: 00:36:08.792 { 00:36:08.792 "name": "nvme0", 00:36:08.792 "trtype": "rdma", 00:36:08.792 "traddr": "192.168.100.8", 00:36:08.792 "adrfam": "ipv4", 00:36:08.792 "trsvcid": "4420", 00:36:08.792 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:08.792 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:08.792 "prchk_reftag": false, 00:36:08.792 "prchk_guard": false, 00:36:08.792 "hdgst": false, 00:36:08.792 "ddgst": false, 00:36:08.792 "dhchap_key": "key1", 00:36:08.792 "dhchap_ctrlr_key": "ckey2", 00:36:08.792 "method": "bdev_nvme_attach_controller", 00:36:08.792 "req_id": 1 00:36:08.792 } 00:36:08.792 Got JSON-RPC error response 00:36:08.792 response: 00:36:08.792 { 00:36:08.792 "code": -5, 00:36:08.792 "message": "Input/output error" 00:36:08.792 } 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:36:08.792 rmmod nvme_rdma 00:36:08.792 rmmod nvme_fabrics 00:36:08.792 20:56:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1333891 ']' 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1333891 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1333891 ']' 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1333891 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1333891 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1333891' 00:36:08.792 killing process with pid 1333891 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1333891 00:36:08.792 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1333891 00:36:09.051 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:09.052 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:36:09.052 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:09.052 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:09.052 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:09.052 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:09.052 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:36:09.052 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:09.052 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:09.052 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:09.052 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:09.052 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:09.052 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:36:09.052 20:56:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:36:12.374 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:12.374 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:12.374 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:12.374 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:12.374 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:12.374 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:12.374 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:12.633 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:12.633 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:12.633 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:12.633 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:12.633 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:12.633 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:12.633 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:12.633 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:12.633 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:14.535 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:36:14.793 20:57:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Xj9 /tmp/spdk.key-null.jxy /tmp/spdk.key-sha256.b6K /tmp/spdk.key-sha384.DWX /tmp/spdk.key-sha512.AYR /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:36:14.793 20:57:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:36:18.979 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:18.979 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:36:18.979 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:18.979 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:18.979 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:18.979 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:18.979 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:18.979 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:18.979 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:18.979 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:18.979 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:18.980 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:36:18.980 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:18.980 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:18.980 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:18.980 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:18.980 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:18.980 00:36:18.980 real 0m57.804s 00:36:18.980 user 0m49.216s 00:36:18.980 sys 0m16.952s 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.980 ************************************ 00:36:18.980 END TEST nvmf_auth_host 00:36:18.980 ************************************ 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:36:18.980 20:57:07 
nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.980 ************************************ 00:36:18.980 START TEST nvmf_bdevperf 00:36:18.980 ************************************ 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:36:18.980 * Looking for test storage... 00:36:18.980 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
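The run of nvmf/common.sh and paths/export.sh traces above is the harness sourcing its shared configuration; the ballooning PATH is simply paths/export.sh prepending the same toolchain directories each time it is re-sourced. Condensed into a plain snippet (the grouping is editorial; the values are the ones visible in the trace), the environment this establishes for the bdevperf test looks roughly like:

  # Sketch of the defaults test/nvmf/common.sh sets up here -- not the script itself
  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_IP_PREFIX=192.168.100
  NVMF_IP_LEAST_ADDR=8
  NVME_HOSTNQN=$(nvme gen-hostnqn)   # here: nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  NET_TYPE=phy   # physical NICs, so nvmftestinit below scans real mlx5 PCI devices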
00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:36:18.980 20:57:07 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:36:27.101 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:36:27.101 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # 
[[ rdma == rdma ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:36:27.101 Found net devices under 0000:d9:00.0: mlx_0_0 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:36:27.101 Found net devices under 0000:d9:00.1: mlx_0_1 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # 
modprobe rdma_cm 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:36:27.101 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:36:27.102 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:27.102 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:36:27.102 altname enp217s0f0np0 00:36:27.102 altname ens818f0np0 00:36:27.102 inet 192.168.100.8/24 scope global mlx_0_0 00:36:27.102 
valid_lft forever preferred_lft forever 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:36:27.102 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:27.102 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:36:27.102 altname enp217s0f1np1 00:36:27.102 altname ens818f1np1 00:36:27.102 inet 192.168.100.9/24 scope global mlx_0_1 00:36:27.102 valid_lft forever preferred_lft forever 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:27.102 20:57:15 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:36:27.102 192.168.100.9' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:36:27.102 192.168.100.9' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:36:27.102 192.168.100.9' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:27.102 20:57:15 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1349825 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1349825 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1349825 ']' 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:27.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:27.102 20:57:15 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:27.102 [2024-07-26 20:57:15.550549] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:36:27.102 [2024-07-26 20:57:15.550599] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:27.102 EAL: No free 2048 kB hugepages reported on node 1 00:36:27.102 [2024-07-26 20:57:15.635477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:27.361 [2024-07-26 20:57:15.676553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:27.361 [2024-07-26 20:57:15.676591] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:27.361 [2024-07-26 20:57:15.676600] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:27.361 [2024-07-26 20:57:15.676609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:27.361 [2024-07-26 20:57:15.676616] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
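The nvmfappstart sequence above reduces to launching nvmf_tgt in the background, remembering its pid (1349825 here), and polling the JSON-RPC socket until the target answers. A minimal sketch of that start-and-wait pattern, with the polling loop standing in for autotest_common.sh's waitforlisten (which the trace calls but does not expand); paths are relative to the spdk checkout:

  # Start the NVMe-oF target on cores 1-3 (-m 0xE) with all tracepoint groups enabled (-e 0xFFFF);
  # -i 0 selects shared-memory instance 0, hence the /dev/shm/nvmf_trace.0 mentioned above
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll until the app answers JSON-RPC on the default socket, as waitforlisten does
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done

The 0xE core mask is also why three reactors come up on cores 1, 2 and 3 just below.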
00:36:27.361 [2024-07-26 20:57:15.676719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:27.361 [2024-07-26 20:57:15.676802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:27.361 [2024-07-26 20:57:15.676803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.929 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:27.929 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:36:27.929 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:27.929 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:27.929 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:27.929 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:27.929 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:36:27.929 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.929 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:27.929 [2024-07-26 20:57:16.444681] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b51520/0x1b55a10) succeed. 00:36:27.929 [2024-07-26 20:57:16.453736] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b52ac0/0x1b970a0) succeed. 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:28.188 Malloc0 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x 00:36:28.188 [2024-07-26 20:57:16.600996] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:28.188 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:28.188 { 00:36:28.188 "params": { 00:36:28.188 "name": "Nvme$subsystem", 00:36:28.188 "trtype": "$TEST_TRANSPORT", 00:36:28.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:28.188 "adrfam": "ipv4", 00:36:28.188 "trsvcid": "$NVMF_PORT", 00:36:28.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:28.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:28.189 "hdgst": ${hdgst:-false}, 00:36:28.189 "ddgst": ${ddgst:-false} 00:36:28.189 }, 00:36:28.189 "method": "bdev_nvme_attach_controller" 00:36:28.189 } 00:36:28.189 EOF 00:36:28.189 )") 00:36:28.189 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:36:28.189 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:36:28.189 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:36:28.189 20:57:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:28.189 "params": { 00:36:28.189 "name": "Nvme1", 00:36:28.189 "trtype": "rdma", 00:36:28.189 "traddr": "192.168.100.8", 00:36:28.189 "adrfam": "ipv4", 00:36:28.189 "trsvcid": "4420", 00:36:28.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:28.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:28.189 "hdgst": false, 00:36:28.189 "ddgst": false 00:36:28.189 }, 00:36:28.189 "method": "bdev_nvme_attach_controller" 00:36:28.189 }' 00:36:28.189 [2024-07-26 20:57:16.653100] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:36:28.189 [2024-07-26 20:57:16.653150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350104 ] 00:36:28.189 EAL: No free 2048 kB hugepages reported on node 1 00:36:28.189 [2024-07-26 20:57:16.739562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.447 [2024-07-26 20:57:16.778302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.447 Running I/O for 1 seconds... 
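rpc_cmd in these traces is effectively a wrapper around scripts/rpc.py, so the target-side provisioning just performed (transport, backing bdev, subsystem, namespace, listener) can be replayed by hand as roughly:

  # Recreate the bdevperf target setup from the trace; rpc.py stands in for rpc_cmd
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

On the initiator side, the --json /dev/fd/62 handed to bdevperf is bash process substitution: gen_nvmf_target_json prints the attach-controller configuration shown above and bdevperf reads it as if it were a file, equivalent to ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1.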
00:36:29.824
00:36:29.824 Latency(us)
00:36:29.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:29.824 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:36:29.824 Verification LBA range: start 0x0 length 0x4000
00:36:29.824 Nvme1n1 : 1.00 18346.41 71.67 0.00 0.00 6939.09 2713.19 11796.48
00:36:29.824 ===================================================================================================================
00:36:29.824 Total : 18346.41 71.67 0.00 0.00 6939.09 2713.19 11796.48
00:36:29.824 20:57:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1350370
00:36:29.824 20:57:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:36:29.824 20:57:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:36:29.824 20:57:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:36:29.824 20:57:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:36:29.824 20:57:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:36:29.824 20:57:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:36:29.824 20:57:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:36:29.824 {
00:36:29.824 "params": {
00:36:29.824 "name": "Nvme$subsystem",
00:36:29.824 "trtype": "$TEST_TRANSPORT",
00:36:29.824 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:29.824 "adrfam": "ipv4",
00:36:29.824 "trsvcid": "$NVMF_PORT",
00:36:29.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:29.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:29.824 "hdgst": ${hdgst:-false},
00:36:29.824 "ddgst": ${ddgst:-false}
00:36:29.824 },
00:36:29.824 "method": "bdev_nvme_attach_controller"
00:36:29.824 }
00:36:29.824 EOF
00:36:29.824 )")
00:36:29.824 20:57:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:36:29.824 20:57:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:36:29.824 20:57:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:36:29.824 20:57:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:36:29.824 "params": {
00:36:29.824 "name": "Nvme1",
00:36:29.824 "trtype": "rdma",
00:36:29.824 "traddr": "192.168.100.8",
00:36:29.824 "adrfam": "ipv4",
00:36:29.824 "trsvcid": "4420",
00:36:29.824 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:29.824 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:29.824 "hdgst": false,
00:36:29.824 "ddgst": false
00:36:29.824 },
00:36:29.824 "method": "bdev_nvme_attach_controller"
00:36:29.824 }'
00:36:29.824 [2024-07-26 20:57:18.200359] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization...
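As a sanity check on the table above: at the 4096-byte I/O size, 18346.41 IOPS works out to about 71.67 MiB/s, exactly the MiB/s column:

  # Throughput = IOPS x IO size, converted from bytes to MiB
  echo '18346.41 * 4096 / 1048576' | bc -l    # 71.666... MiB/s

The second bdevperf run launched above adds -t 15 -f to the same arguments; judging by the test flow below, where the target is killed mid-run, -f appears to tell bdevperf to keep running through I/O failures rather than abort on the first error, which is what a kill-and-restart failover test needs.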
00:36:29.824 [2024-07-26 20:57:18.200415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350370 ] 00:36:29.824 EAL: No free 2048 kB hugepages reported on node 1 00:36:29.824 [2024-07-26 20:57:18.285754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.824 [2024-07-26 20:57:18.323778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.082 Running I/O for 15 seconds... 00:36:32.616 20:57:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1349825 00:36:32.616 20:57:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:34.062 [2024-07-26 20:57:22.191635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x184400 00:36:34.062 [2024-07-26 20:57:22.191678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.062 [2024-07-26 20:57:22.191696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x184400 00:36:34.062 [2024-07-26 20:57:22.191706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.062 [2024-07-26 20:57:22.191717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x184400 00:36:34.062 [2024-07-26 20:57:22.191727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.062 [2024-07-26 20:57:22.191738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x184400 00:36:34.062 [2024-07-26 20:57:22.191747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.062 [2024-07-26 20:57:22.191758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x184400 00:36:34.062 [2024-07-26 20:57:22.191767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.062 [2024-07-26 20:57:22.191778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x184400 00:36:34.062 [2024-07-26 20:57:22.191787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.062 [2024-07-26 20:57:22.191799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x184400 00:36:34.062 [2024-07-26 20:57:22.191813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.062 [2024-07-26 
20:57:22.191834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x184400 00:36:34.062 [2024-07-26 20:57:22.191843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.062
[... repeated READ / ABORTED - SQ DELETION record pairs (lba 128280 through 128488) trimmed; every completion carries the same ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 status ...]
00:36:34.063 [2024-07-26 20:57:22.192368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.063 [2024-07-26 20:57:22.192861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x184400 00:36:34.063 [2024-07-26 20:57:22.192869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.192879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.192888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 
m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.192897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.192906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.192918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.192927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.192937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.192945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.192955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.192964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.192974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.192984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.192994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193069] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:128856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:128896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128928 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x200007516000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.064 [2024-07-26 20:57:22.193524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x184400 00:36:34.064 [2024-07-26 20:57:22.193533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x184400 00:36:34.065 [2024-07-26 20:57:22.193552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x184400 00:36:34.065 [2024-07-26 20:57:22.193572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x184400 00:36:34.065 
[2024-07-26 20:57:22.193591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x184400 00:36:34.065 [2024-07-26 20:57:22.193610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x184400 00:36:34.065 [2024-07-26 20:57:22.193639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:75 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.193984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.193993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.194003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.194012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.194023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.194032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.194041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.194049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.194059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.194068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.194078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.194086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.194096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.203394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.203476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.065 [2024-07-26 20:57:22.203516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a91c0000 sqhd:52b0 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.205749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:34.065 [2024-07-26 20:57:22.205789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:34.065 [2024-07-26 20:57:22.205820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129232 
len:8 PRP1 0x0 PRP2 0x0 00:36:34.065 [2024-07-26 20:57:22.205854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.205939] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:36:34.065 [2024-07-26 20:57:22.206022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:34.065 [2024-07-26 20:57:22.206059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.206094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:34.065 [2024-07-26 20:57:22.206125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.206159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:34.065 [2024-07-26 20:57:22.206192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.206225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:34.065 [2024-07-26 20:57:22.206264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.065 [2024-07-26 20:57:22.224922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:36:34.065 [2024-07-26 20:57:22.224982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:34.065 [2024-07-26 20:57:22.225014] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:34.066 [2024-07-26 20:57:22.227908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:34.066 [2024-07-26 20:57:22.230667] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:34.066 [2024-07-26 20:57:22.230686] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:34.066 [2024-07-26 20:57:22.230695] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:36:35.002 [2024-07-26 20:57:23.234718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:36:35.002 [2024-07-26 20:57:23.234776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
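The long run of ABORTED - SQ DELETION completions above, followed by these reset attempts, is the host-side signature of a target disconnect: deleting the submission queue fails every queued command, and bdev_nvme then retries the controller reset until the RDMA listener answers again. As a rough sketch, one way to provoke this pattern, consistent with what this harness appears to do, is to kill the target mid-I/O and restart it (the nvmf_tgt path and 0xE core mask are taken from this log; the pid variable is illustrative):

  # Drop the RDMA listener while the host still has I/O queued; until the
  # target is back and reconfigured, the host logs CQ transport error -6
  # and RDMA_CM_EVENT_REJECTED, exactly as seen here.
  kill -9 "$nvmfpid"
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0xE &
  nvmfpid=$!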
00:36:35.002 [2024-07-26 20:57:23.235101] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.002 [2024-07-26 20:57:23.235116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.002 [2024-07-26 20:57:23.235130] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:36:35.002 [2024-07-26 20:57:23.238683] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:35.002 [2024-07-26 20:57:23.238903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.002 [2024-07-26 20:57:23.251487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.002 [2024-07-26 20:57:23.254135] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:35.002 [2024-07-26 20:57:23.254154] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:35.002 [2024-07-26 20:57:23.254162] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:36:35.938 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1349825 Killed "${NVMF_APP[@]}" "$@" 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1351211 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1351211 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1351211 ']' 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:35.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:35.938 20:57:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:35.938 [2024-07-26 20:57:24.227135] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 
00:36:35.938 [2024-07-26 20:57:24.227187] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:35.938 [2024-07-26 20:57:24.258113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:36:35.938 [2024-07-26 20:57:24.258137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.938 [2024-07-26 20:57:24.258309] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.938 [2024-07-26 20:57:24.258321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.938 [2024-07-26 20:57:24.258332] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:36:35.938 [2024-07-26 20:57:24.260972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.938 [2024-07-26 20:57:24.263806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.938 [2024-07-26 20:57:24.266250] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:35.938 [2024-07-26 20:57:24.266271] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:35.938 [2024-07-26 20:57:24.266279] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:36:35.938 EAL: No free 2048 kB hugepages reported on node 1 00:36:35.938 [2024-07-26 20:57:24.315256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:35.938 [2024-07-26 20:57:24.355371] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:35.938 [2024-07-26 20:57:24.355409] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:35.938 [2024-07-26 20:57:24.355418] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:35.938 [2024-07-26 20:57:24.355427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:35.938 [2024-07-26 20:57:24.355434] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
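The app_setup_trace notices above also name the debugging hook for runs like this: the target keeps its tracepoint buffer in shared memory, so a snapshot can be grabbed while I/O is failing over. A short example built only from the commands the notices themselves mention (the backup destination is an assumption):

  # Snapshot nvmf tracepoints from the running target (shm id 0) ...
  spdk_trace -s nvmf -i 0
  # ... or copy the raw buffer for offline analysis, as the notice suggests.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0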
00:36:35.938 [2024-07-26 20:57:24.355482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:35.938 [2024-07-26 20:57:24.355566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:35.938 [2024-07-26 20:57:24.355568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:36.506 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:36.506 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:36:36.506 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:36.506 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:36.506 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:36.765 [2024-07-26 20:57:25.109415] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x210c520/0x2110a10) succeed. 00:36:36.765 [2024-07-26 20:57:25.118542] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x210dac0/0x21520a0) succeed. 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:36.765 Malloc0 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x
00:36:36.765 [2024-07-26 20:57:25.262340] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:36.765 20:57:25 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1350370
00:36:36.765 [2024-07-26 20:57:25.270188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:36:36.765 [2024-07-26 20:57:25.270212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:36.765 [2024-07-26 20:57:25.270227] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:36:36.765 [2024-07-26 20:57:25.270400] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:36.765 [2024-07-26 20:57:25.270412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:36.765 [2024-07-26 20:57:25.270423] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:36:36.765 [2024-07-26 20:57:25.273086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:36.765 [2024-07-26 20:57:25.281747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:37.024 [2024-07-26 20:57:25.326445] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:36:45.146
00:36:45.146                                                           Latency(us)
00:36:45.146 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:36:45.146 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:36:45.146 Verification LBA range: start 0x0 length 0x4000
00:36:45.146 Nvme1n1                     :      15.01   13272.65      51.85   10757.71       0.00    5308.20     353.89 1067030.94
00:36:45.146 ===================================================================================================================
00:36:45.146 Total                       :   13272.65      51.85   10757.71       0.00    5308.20     353.89 1067030.94
00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in
{1..20} 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:36:45.405 rmmod nvme_rdma 00:36:45.405 rmmod nvme_fabrics 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1351211 ']' 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1351211 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1351211 ']' 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1351211 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1351211 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1351211' 00:36:45.405 killing process with pid 1351211 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1351211 00:36:45.405 20:57:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1351211 00:36:45.663 20:57:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:45.663 20:57:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:36:45.663 00:36:45.663 real 0m26.833s 00:36:45.663 user 1m4.599s 00:36:45.663 sys 0m7.527s 00:36:45.663 20:57:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:45.663 20:57:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:45.664 ************************************ 00:36:45.664 END TEST nvmf_bdevperf 00:36:45.664 ************************************ 00:36:45.664 20:57:34 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:36:45.664 20:57:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:45.664 20:57:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:45.664 20:57:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.664 ************************************ 00:36:45.664 START TEST nvmf_target_disconnect 00:36:45.664 ************************************ 00:36:45.664 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:36:45.922 * Looking for test storage... 
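The bdevperf teardown above runs in reverse order of its setup: the subsystem is deleted, the nvme-rdma and nvme-fabrics modules are unloaded, and the target process is killed. For reference before target_disconnect.sh builds its own target, the setup half, taken verbatim from the rpc_cmd trace earlier in this run, amounts to the following sketch (rpc.py speaking to the default /var/tmp/spdk.sock; the $rpc shorthand is illustrative):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420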
00:36:45.922 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:36:45.922 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:36:45.922 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:45.922 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:45.922 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:45.922 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:45.922 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:45.922 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:45.922 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:45.922 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:36:45.923 20:57:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:36:54.042 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:36:54.043 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:36:54.043 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:36:54.043 20:57:41 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:36:54.043 Found net devices under 0000:d9:00.0: mlx_0_0 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:36:54.043 Found net devices under 0000:d9:00.1: mlx_0_1 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:36:54.043 20:57:41 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:36:54.043 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:54.043 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:36:54.043 altname enp217s0f0np0 00:36:54.043 altname ens818f0np0 00:36:54.043 inet 192.168.100.8/24 scope global mlx_0_0 00:36:54.043 valid_lft forever preferred_lft forever 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:36:54.043 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:54.043 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:36:54.043 altname enp217s0f1np1 00:36:54.043 altname ens818f1np1 00:36:54.043 inet 192.168.100.9/24 scope global mlx_0_1 00:36:54.043 valid_lft forever preferred_lft forever 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:36:54.043 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:36:54.044 192.168.100.9' 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:36:54.044 192.168.100.9' 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:36:54.044 192.168.100.9' 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n 
+2 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:54.044 20:57:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:54.044 ************************************ 00:36:54.044 START TEST nvmf_target_disconnect_tc1 00:36:54.044 ************************************ 00:36:54.044 20:57:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:36:54.044 20:57:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:54.044 20:57:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:36:54.044 20:57:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:54.044 20:57:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:54.044 20:57:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.044 20:57:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:54.044 20:57:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.044 20:57:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:54.044 20:57:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.044 20:57:42 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:54.044 20:57:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:36:54.044 20:57:42 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:54.044 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.044 [2024-07-26 20:57:42.177325] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:54.044 [2024-07-26 20:57:42.177367] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:54.044 [2024-07-26 20:57:42.177376] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:36:54.979 [2024-07-26 20:57:43.181275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:36:54.979 [2024-07-26 20:57:43.181306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:36:54.979 [2024-07-26 20:57:43.181317] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:36:54.979 [2024-07-26 20:57:43.181344] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:54.979 [2024-07-26 20:57:43.181353] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:36:54.979 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:36:54.979 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:54.979 Initializing NVMe Controllers 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:54.979 00:36:54.979 real 0m1.154s 00:36:54.979 user 0m0.872s 00:36:54.979 sys 0m0.270s 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:54.979 ************************************ 00:36:54.979 END TEST nvmf_target_disconnect_tc1 00:36:54.979 ************************************ 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:54.979 20:57:43 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:54.979 ************************************ 00:36:54.979 START TEST nvmf_target_disconnect_tc2 00:36:54.979 ************************************ 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1356989 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1356989 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1356989 ']' 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:54.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:54.979 20:57:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:54.979 [2024-07-26 20:57:43.328591] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:36:54.979 [2024-07-26 20:57:43.328641] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:54.979 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.979 [2024-07-26 20:57:43.424952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:54.979 [2024-07-26 20:57:43.465147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:54.979 [2024-07-26 20:57:43.465184] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:54.979 [2024-07-26 20:57:43.465194] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:54.979 [2024-07-26 20:57:43.465203] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:54.979 [2024-07-26 20:57:43.465210] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:54.979 [2024-07-26 20:57:43.465326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:36:54.979 [2024-07-26 20:57:43.465439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:36:54.979 [2024-07-26 20:57:43.465548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:36:54.979 [2024-07-26 20:57:43.465550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:55.914 Malloc0 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:55.914 [2024-07-26 20:57:44.218122] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xed9020/0xee5530) succeed. 00:36:55.914 [2024-07-26 20:57:44.227754] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xeda660/0xf655d0) succeed. 
00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:55.914 [2024-07-26 20:57:44.366170] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1357261 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:55.914 20:57:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:55.914 EAL: No free 2048 kB hugepages reported on node 1 00:36:57.862 20:57:46 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1356989 00:36:57.862 20:57:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:59.242 Read completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Write completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Read completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Read completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Write completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Read completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Read completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Write completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Write completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Read completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Read completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Write completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Write completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Read completed with error (sct=0, sc=8) 00:36:59.242 starting I/O failed 00:36:59.242 Write completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Read completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Read completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Write completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Read completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Read completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Write completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Read completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Write completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Read completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Write completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Read completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Read completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Write completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Read completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Write completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Write completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 Write completed with error (sct=0, sc=8) 00:36:59.243 starting I/O failed 00:36:59.243 [2024-07-26 20:57:47.580496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:59.243 [2024-07-26 20:57:47.582056] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:59.243 [2024-07-26 20:57:47.582076] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:59.243 [2024-07-26 
20:57:47.582085] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:00.181 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1356989 Killed "${NVMF_APP[@]}" "$@" 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1357817 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1357817 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1357817 ']' 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:00.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:00.181 20:57:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:00.181 [2024-07-26 20:57:48.443798] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:37:00.181 [2024-07-26 20:57:48.443853] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:00.181 EAL: No free 2048 kB hugepages reported on node 1 00:37:00.181 [2024-07-26 20:57:48.545151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:00.181 [2024-07-26 20:57:48.583740] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:00.181 [2024-07-26 20:57:48.583786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:00.181 [2024-07-26 20:57:48.583796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:00.181 [2024-07-26 20:57:48.583806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:00.181 [2024-07-26 20:57:48.583813] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:00.181 [2024-07-26 20:57:48.583934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:37:00.181 [2024-07-26 20:57:48.584046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:37:00.181 [2024-07-26 20:57:48.584154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:37:00.181 [2024-07-26 20:57:48.584156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:37:00.181 [2024-07-26 20:57:48.585867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:00.181 qpair failed and we were unable to recover it. 00:37:00.181 [2024-07-26 20:57:48.587422] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:00.181 [2024-07-26 20:57:48.587441] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:00.181 [2024-07-26 20:57:48.587450] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:00.750 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:00.750 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:37:00.750 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:00.750 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:00.750 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:00.750 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:00.750 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:00.750 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.009 Malloc0 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.009 [2024-07-26 
20:57:49.349527] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf6c020/0xf78530) succeed. 00:37:01.009 [2024-07-26 20:57:49.359399] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf6d660/0xff85d0) succeed. 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.009 [2024-07-26 20:57:49.498691] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.009 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.010 20:57:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1357261 00:37:01.269 [2024-07-26 20:57:49.591588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.269 qpair failed and we were unable to recover it. 
00:37:01.269 [2024-07-26 20:57:49.596716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.269 [2024-07-26 20:57:49.596774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.269 [2024-07-26 20:57:49.596794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.269 [2024-07-26 20:57:49.596805] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.269 [2024-07-26 20:57:49.596814] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.269 [2024-07-26 20:57:49.607267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.269 qpair failed and we were unable to recover it. 00:37:01.269 [2024-07-26 20:57:49.616786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.270 [2024-07-26 20:57:49.616823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.270 [2024-07-26 20:57:49.616841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.270 [2024-07-26 20:57:49.616850] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.270 [2024-07-26 20:57:49.616860] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.270 [2024-07-26 20:57:49.627071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.270 qpair failed and we were unable to recover it. 00:37:01.270 [2024-07-26 20:57:49.636855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.270 [2024-07-26 20:57:49.636893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.270 [2024-07-26 20:57:49.636911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.270 [2024-07-26 20:57:49.636921] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.270 [2024-07-26 20:57:49.636930] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.270 [2024-07-26 20:57:49.647268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.270 qpair failed and we were unable to recover it. 
00:37:01.270 [2024-07-26 20:57:49.656855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.270 [2024-07-26 20:57:49.656901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.270 [2024-07-26 20:57:49.656918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.270 [2024-07-26 20:57:49.656928] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.270 [2024-07-26 20:57:49.656936] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.270 [2024-07-26 20:57:49.667478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.270 qpair failed and we were unable to recover it. 00:37:01.270 [2024-07-26 20:57:49.676978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.270 [2024-07-26 20:57:49.677022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.270 [2024-07-26 20:57:49.677042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.270 [2024-07-26 20:57:49.677052] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.270 [2024-07-26 20:57:49.677061] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.270 [2024-07-26 20:57:49.687381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.270 qpair failed and we were unable to recover it. 00:37:01.270 [2024-07-26 20:57:49.697000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.270 [2024-07-26 20:57:49.697038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.270 [2024-07-26 20:57:49.697056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.270 [2024-07-26 20:57:49.697065] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.270 [2024-07-26 20:57:49.697075] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.270 [2024-07-26 20:57:49.707255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.270 qpair failed and we were unable to recover it. 
00:37:01.270 [2024-07-26 20:57:49.717159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.270 [2024-07-26 20:57:49.717198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.270 [2024-07-26 20:57:49.717215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.270 [2024-07-26 20:57:49.717225] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.270 [2024-07-26 20:57:49.717234] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.270 [2024-07-26 20:57:49.727468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.270 qpair failed and we were unable to recover it. 00:37:01.270 [2024-07-26 20:57:49.737155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.270 [2024-07-26 20:57:49.737196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.270 [2024-07-26 20:57:49.737213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.270 [2024-07-26 20:57:49.737223] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.270 [2024-07-26 20:57:49.737232] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.270 [2024-07-26 20:57:49.747707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.270 qpair failed and we were unable to recover it. 00:37:01.270 [2024-07-26 20:57:49.757154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.270 [2024-07-26 20:57:49.757201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.270 [2024-07-26 20:57:49.757218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.270 [2024-07-26 20:57:49.757227] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.270 [2024-07-26 20:57:49.757239] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.270 [2024-07-26 20:57:49.767398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.270 qpair failed and we were unable to recover it. 
00:37:01.270 [2024-07-26 20:57:49.777275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.270 [2024-07-26 20:57:49.777310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.270 [2024-07-26 20:57:49.777326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.270 [2024-07-26 20:57:49.777336] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.270 [2024-07-26 20:57:49.777345] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.270 [2024-07-26 20:57:49.787781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.270 qpair failed and we were unable to recover it. 00:37:01.270 [2024-07-26 20:57:49.797333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.270 [2024-07-26 20:57:49.797368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.270 [2024-07-26 20:57:49.797385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.270 [2024-07-26 20:57:49.797395] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.270 [2024-07-26 20:57:49.797404] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.270 [2024-07-26 20:57:49.807642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.270 qpair failed and we were unable to recover it. 00:37:01.270 [2024-07-26 20:57:49.817421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.270 [2024-07-26 20:57:49.817459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.270 [2024-07-26 20:57:49.817475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.270 [2024-07-26 20:57:49.817485] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.270 [2024-07-26 20:57:49.817494] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.529 [2024-07-26 20:57:49.827594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.529 qpair failed and we were unable to recover it. 
00:37:01.529 [2024-07-26 20:57:49.837209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.529 [2024-07-26 20:57:49.837256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.529 [2024-07-26 20:57:49.837271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.529 [2024-07-26 20:57:49.837281] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.529 [2024-07-26 20:57:49.837291] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.529 [2024-07-26 20:57:49.847500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.529 qpair failed and we were unable to recover it. 00:37:01.529 [2024-07-26 20:57:49.857489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.529 [2024-07-26 20:57:49.857534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.529 [2024-07-26 20:57:49.857551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.529 [2024-07-26 20:57:49.857560] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.529 [2024-07-26 20:57:49.857570] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.529 [2024-07-26 20:57:49.868020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.529 qpair failed and we were unable to recover it. 00:37:01.529 [2024-07-26 20:57:49.877464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.529 [2024-07-26 20:57:49.877504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.529 [2024-07-26 20:57:49.877521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.529 [2024-07-26 20:57:49.877531] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.529 [2024-07-26 20:57:49.877540] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.529 [2024-07-26 20:57:49.888143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.529 qpair failed and we were unable to recover it. 
00:37:01.529 [2024-07-26 20:57:49.897597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.529 [2024-07-26 20:57:49.897642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.529 [2024-07-26 20:57:49.897659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.529 [2024-07-26 20:57:49.897669] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.529 [2024-07-26 20:57:49.897679] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.529 [2024-07-26 20:57:49.908000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.529 qpair failed and we were unable to recover it. 00:37:01.529 [2024-07-26 20:57:49.917631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.529 [2024-07-26 20:57:49.917674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.530 [2024-07-26 20:57:49.917691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.530 [2024-07-26 20:57:49.917701] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.530 [2024-07-26 20:57:49.917710] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.530 [2024-07-26 20:57:49.927946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.530 qpair failed and we were unable to recover it. 00:37:01.530 [2024-07-26 20:57:49.937518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.530 [2024-07-26 20:57:49.937554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.530 [2024-07-26 20:57:49.937570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.530 [2024-07-26 20:57:49.937585] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.530 [2024-07-26 20:57:49.937594] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.530 [2024-07-26 20:57:49.947942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.530 qpair failed and we were unable to recover it. 
00:37:01.530 [2024-07-26 20:57:49.957646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.530 [2024-07-26 20:57:49.957684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.530 [2024-07-26 20:57:49.957700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.530 [2024-07-26 20:57:49.957709] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.530 [2024-07-26 20:57:49.957718] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.530 [2024-07-26 20:57:49.967920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.530 qpair failed and we were unable to recover it. 00:37:01.530 [2024-07-26 20:57:49.977676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.530 [2024-07-26 20:57:49.977717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.530 [2024-07-26 20:57:49.977732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.530 [2024-07-26 20:57:49.977742] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.530 [2024-07-26 20:57:49.977750] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.530 [2024-07-26 20:57:49.988056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.530 qpair failed and we were unable to recover it. 00:37:01.530 [2024-07-26 20:57:49.997673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.530 [2024-07-26 20:57:49.997716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.530 [2024-07-26 20:57:49.997731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.530 [2024-07-26 20:57:49.997741] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.530 [2024-07-26 20:57:49.997750] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.530 [2024-07-26 20:57:50.008045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.530 qpair failed and we were unable to recover it. 
00:37:01.530 [2024-07-26 20:57:50.017802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.530 [2024-07-26 20:57:50.017842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.530 [2024-07-26 20:57:50.017858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.530 [2024-07-26 20:57:50.017868] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.530 [2024-07-26 20:57:50.017876] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.530 [2024-07-26 20:57:50.028242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.530 qpair failed and we were unable to recover it. 00:37:01.530 [2024-07-26 20:57:50.037879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:01.530 [2024-07-26 20:57:50.037914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:01.530 [2024-07-26 20:57:50.037930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:01.530 [2024-07-26 20:57:50.037940] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:01.530 [2024-07-26 20:57:50.037949] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:01.530 [2024-07-26 20:57:50.048656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:01.530 qpair failed and we were unable to recover it. 
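Each iteration above is the same three-stage failure on qpair id 1: the target's ctrlr.c rejects the I/O-qpair CONNECT with 'Unknown controller ID 0x1', the host sees the CONNECT completion carrying sct 1, sc 130, and the queue pair then dies with CQ transport error -6. Read against the NVMe-oF status tables (my decoding, not stated in the log), sct 1 is the command-specific status type and sc 130 is 0x82, the Fabrics 'Connect Invalid Parameters' code, which is consistent with the unknown-controller rejection on the target side. A one-line sanity check of that conversion:

printf 'sct=%d sc=%d (0x%02x)\n' 1 130 130   # prints: sct=1 sc=130 (0x82)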
00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Read completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 Write completed with error (sct=0, sc=8) 00:37:02.906 starting I/O failed 00:37:02.906 [2024-07-26 20:57:51.053450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.906 [2024-07-26 20:57:51.060692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.906 [2024-07-26 20:57:51.060742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.906 [2024-07-26 20:57:51.060762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.906 [2024-07-26 20:57:51.060772] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:37:02.906 [2024-07-26 20:57:51.060782] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.906 [2024-07-26 20:57:51.071268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.906 qpair failed and we were unable to recover it. 00:37:02.906 [2024-07-26 20:57:51.081024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.906 [2024-07-26 20:57:51.081070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.906 [2024-07-26 20:57:51.081088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.906 [2024-07-26 20:57:51.081098] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.906 [2024-07-26 20:57:51.081107] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.906 [2024-07-26 20:57:51.091192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.906 qpair failed and we were unable to recover it. 00:37:02.906 [2024-07-26 20:57:51.101016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.906 [2024-07-26 20:57:51.101060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.906 [2024-07-26 20:57:51.101077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.906 [2024-07-26 20:57:51.101087] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.906 [2024-07-26 20:57:51.101096] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.906 [2024-07-26 20:57:51.111624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.906 qpair failed and we were unable to recover it. 00:37:02.906 [2024-07-26 20:57:51.121108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.906 [2024-07-26 20:57:51.121145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.906 [2024-07-26 20:57:51.121162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.906 [2024-07-26 20:57:51.121172] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.906 [2024-07-26 20:57:51.121180] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.906 [2024-07-26 20:57:51.131448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.906 qpair failed and we were unable to recover it. 
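The burst of 32 'Write/Read completed with error (sct=0, sc=8) ... starting I/O failed' records that opens this stretch is the in-flight I/O being failed back as qpair 3 goes down: under the generic status type (sct 0), code 8 decodes, by my reading of the NVMe base-spec status tables, to 'Command Aborted due to SQ Deletion'. A sketch for counting them from a saved copy of this console output (the log filename here is an assumption):

grep -c 'completed with error (sct=0, sc=8)' nvmf-phy-autotest-console.log   # 32 for the burst above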
00:37:02.906 [2024-07-26 20:57:51.141355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.906 [2024-07-26 20:57:51.141393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.906 [2024-07-26 20:57:51.141409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.907 [2024-07-26 20:57:51.141419] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.907 [2024-07-26 20:57:51.141428] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.907 [2024-07-26 20:57:51.151532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.907 qpair failed and we were unable to recover it. 00:37:02.907 [2024-07-26 20:57:51.161215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.907 [2024-07-26 20:57:51.161252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.907 [2024-07-26 20:57:51.161269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.907 [2024-07-26 20:57:51.161281] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.907 [2024-07-26 20:57:51.161290] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.907 [2024-07-26 20:57:51.171667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.907 qpair failed and we were unable to recover it. 00:37:02.907 [2024-07-26 20:57:51.181235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.907 [2024-07-26 20:57:51.181274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.907 [2024-07-26 20:57:51.181290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.907 [2024-07-26 20:57:51.181300] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.907 [2024-07-26 20:57:51.181308] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.907 [2024-07-26 20:57:51.191648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.907 qpair failed and we were unable to recover it. 
00:37:02.907 [2024-07-26 20:57:51.201249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.907 [2024-07-26 20:57:51.201287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.907 [2024-07-26 20:57:51.201304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.907 [2024-07-26 20:57:51.201313] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.907 [2024-07-26 20:57:51.201322] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.907 [2024-07-26 20:57:51.211777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.907 qpair failed and we were unable to recover it. 00:37:02.907 [2024-07-26 20:57:51.221353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.907 [2024-07-26 20:57:51.221393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.907 [2024-07-26 20:57:51.221410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.907 [2024-07-26 20:57:51.221419] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.907 [2024-07-26 20:57:51.221428] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.907 [2024-07-26 20:57:51.231759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.907 qpair failed and we were unable to recover it. 00:37:02.907 [2024-07-26 20:57:51.241388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.907 [2024-07-26 20:57:51.241428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.907 [2024-07-26 20:57:51.241445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.907 [2024-07-26 20:57:51.241454] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.907 [2024-07-26 20:57:51.241463] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.907 [2024-07-26 20:57:51.251843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.907 qpair failed and we were unable to recover it. 
00:37:02.907 [2024-07-26 20:57:51.261526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.907 [2024-07-26 20:57:51.261566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.907 [2024-07-26 20:57:51.261583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.907 [2024-07-26 20:57:51.261592] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.907 [2024-07-26 20:57:51.261602] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.907 [2024-07-26 20:57:51.271973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.907 qpair failed and we were unable to recover it. 00:37:02.907 [2024-07-26 20:57:51.281596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.907 [2024-07-26 20:57:51.281640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.907 [2024-07-26 20:57:51.281656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.907 [2024-07-26 20:57:51.281666] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.907 [2024-07-26 20:57:51.281675] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.907 [2024-07-26 20:57:51.291933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.907 qpair failed and we were unable to recover it. 00:37:02.907 [2024-07-26 20:57:51.301514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.907 [2024-07-26 20:57:51.301551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.907 [2024-07-26 20:57:51.301567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.907 [2024-07-26 20:57:51.301577] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.907 [2024-07-26 20:57:51.301586] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.907 [2024-07-26 20:57:51.312040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.907 qpair failed and we were unable to recover it. 
00:37:02.907 [2024-07-26 20:57:51.321683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.907 [2024-07-26 20:57:51.321721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.907 [2024-07-26 20:57:51.321737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.907 [2024-07-26 20:57:51.321747] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.907 [2024-07-26 20:57:51.321756] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.907 [2024-07-26 20:57:51.332033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.907 qpair failed and we were unable to recover it. 00:37:02.907 [2024-07-26 20:57:51.341692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.907 [2024-07-26 20:57:51.341730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.907 [2024-07-26 20:57:51.341749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.907 [2024-07-26 20:57:51.341759] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.907 [2024-07-26 20:57:51.341769] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.907 [2024-07-26 20:57:51.352102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.907 qpair failed and we were unable to recover it. 00:37:02.907 [2024-07-26 20:57:51.361811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.907 [2024-07-26 20:57:51.361850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.907 [2024-07-26 20:57:51.361867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.907 [2024-07-26 20:57:51.361877] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.907 [2024-07-26 20:57:51.361885] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.907 [2024-07-26 20:57:51.372269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.907 qpair failed and we were unable to recover it. 
00:37:02.907 [2024-07-26 20:57:51.381905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.907 [2024-07-26 20:57:51.381945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.907 [2024-07-26 20:57:51.381962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.908 [2024-07-26 20:57:51.381971] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.908 [2024-07-26 20:57:51.381980] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.908 [2024-07-26 20:57:51.392325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.908 qpair failed and we were unable to recover it. 00:37:02.908 [2024-07-26 20:57:51.401908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.908 [2024-07-26 20:57:51.401946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.908 [2024-07-26 20:57:51.401962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.908 [2024-07-26 20:57:51.401972] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.908 [2024-07-26 20:57:51.401980] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.908 [2024-07-26 20:57:51.412554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.908 qpair failed and we were unable to recover it. 00:37:02.908 [2024-07-26 20:57:51.422039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.908 [2024-07-26 20:57:51.422079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.908 [2024-07-26 20:57:51.422096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.908 [2024-07-26 20:57:51.422105] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.908 [2024-07-26 20:57:51.422117] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.908 [2024-07-26 20:57:51.432452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.908 qpair failed and we were unable to recover it. 
00:37:02.908 [2024-07-26 20:57:51.442128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.908 [2024-07-26 20:57:51.442167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.908 [2024-07-26 20:57:51.442184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.908 [2024-07-26 20:57:51.442193] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.908 [2024-07-26 20:57:51.442202] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:02.908 [2024-07-26 20:57:51.452655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:02.908 qpair failed and we were unable to recover it. 00:37:03.168 [2024-07-26 20:57:51.462197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.168 [2024-07-26 20:57:51.462236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.168 [2024-07-26 20:57:51.462254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.168 [2024-07-26 20:57:51.462264] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.168 [2024-07-26 20:57:51.462272] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.168 [2024-07-26 20:57:51.472911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.168 qpair failed and we were unable to recover it. 00:37:03.168 [2024-07-26 20:57:51.482215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.168 [2024-07-26 20:57:51.482252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.168 [2024-07-26 20:57:51.482269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.168 [2024-07-26 20:57:51.482278] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.168 [2024-07-26 20:57:51.482287] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.168 [2024-07-26 20:57:51.492596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.168 qpair failed and we were unable to recover it. 
00:37:03.168 [2024-07-26 20:57:51.502231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.168 [2024-07-26 20:57:51.502271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.168 [2024-07-26 20:57:51.502287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.168 [2024-07-26 20:57:51.502297] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.168 [2024-07-26 20:57:51.502306] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.168 [2024-07-26 20:57:51.512805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.168 qpair failed and we were unable to recover it. 00:37:03.168 [2024-07-26 20:57:51.522290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.168 [2024-07-26 20:57:51.522324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.168 [2024-07-26 20:57:51.522341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.168 [2024-07-26 20:57:51.522351] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.168 [2024-07-26 20:57:51.522360] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.168 [2024-07-26 20:57:51.532688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.168 qpair failed and we were unable to recover it. 00:37:03.168 [2024-07-26 20:57:51.542335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.168 [2024-07-26 20:57:51.542374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.168 [2024-07-26 20:57:51.542390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.168 [2024-07-26 20:57:51.542400] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.168 [2024-07-26 20:57:51.542409] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.168 [2024-07-26 20:57:51.552741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.168 qpair failed and we were unable to recover it. 
00:37:03.168 [2024-07-26 20:57:51.562328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.168 [2024-07-26 20:57:51.562363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.168 [2024-07-26 20:57:51.562379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.168 [2024-07-26 20:57:51.562388] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.168 [2024-07-26 20:57:51.562397] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.168 [2024-07-26 20:57:51.572969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.168 qpair failed and we were unable to recover it. 00:37:03.168 [2024-07-26 20:57:51.582491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.168 [2024-07-26 20:57:51.582529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.168 [2024-07-26 20:57:51.582545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.168 [2024-07-26 20:57:51.582555] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.168 [2024-07-26 20:57:51.582564] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.168 [2024-07-26 20:57:51.592979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.168 qpair failed and we were unable to recover it. 00:37:03.168 [2024-07-26 20:57:51.602358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.168 [2024-07-26 20:57:51.602399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.168 [2024-07-26 20:57:51.602415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.168 [2024-07-26 20:57:51.602428] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.168 [2024-07-26 20:57:51.602437] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.168 [2024-07-26 20:57:51.612997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.168 qpair failed and we were unable to recover it. 
00:37:03.168 [2024-07-26 20:57:51.622674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.168 [2024-07-26 20:57:51.622714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.168 [2024-07-26 20:57:51.622738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.168 [2024-07-26 20:57:51.622748] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.168 [2024-07-26 20:57:51.622757] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.168 [2024-07-26 20:57:51.633092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.168 qpair failed and we were unable to recover it. 00:37:03.168 [2024-07-26 20:57:51.642632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.168 [2024-07-26 20:57:51.642671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.169 [2024-07-26 20:57:51.642687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.169 [2024-07-26 20:57:51.642696] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.169 [2024-07-26 20:57:51.642705] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.169 [2024-07-26 20:57:51.653226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.169 qpair failed and we were unable to recover it. 00:37:03.169 [2024-07-26 20:57:51.662723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.169 [2024-07-26 20:57:51.662762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.169 [2024-07-26 20:57:51.662779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.169 [2024-07-26 20:57:51.662788] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.169 [2024-07-26 20:57:51.662797] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.169 [2024-07-26 20:57:51.673191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.169 qpair failed and we were unable to recover it. 
00:37:03.169 [2024-07-26 20:57:51.682757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.169 [2024-07-26 20:57:51.682794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.169 [2024-07-26 20:57:51.682810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.169 [2024-07-26 20:57:51.682819] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.169 [2024-07-26 20:57:51.682828] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.169 [2024-07-26 20:57:51.693316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.169 qpair failed and we were unable to recover it. 00:37:03.169 [2024-07-26 20:57:51.702754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.169 [2024-07-26 20:57:51.702791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.169 [2024-07-26 20:57:51.702809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.169 [2024-07-26 20:57:51.702819] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.169 [2024-07-26 20:57:51.702827] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.169 [2024-07-26 20:57:51.713222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.169 qpair failed and we were unable to recover it. 00:37:03.429 [2024-07-26 20:57:51.722980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.429 [2024-07-26 20:57:51.723018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.429 [2024-07-26 20:57:51.723037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.429 [2024-07-26 20:57:51.723046] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.429 [2024-07-26 20:57:51.723055] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.429 [2024-07-26 20:57:51.733427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.429 qpair failed and we were unable to recover it. 
00:37:03.429 [2024-07-26 20:57:51.743006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.429 [2024-07-26 20:57:51.743049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.429 [2024-07-26 20:57:51.743067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.429 [2024-07-26 20:57:51.743076] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.429 [2024-07-26 20:57:51.743086] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.429 [2024-07-26 20:57:51.753597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.429 qpair failed and we were unable to recover it. 00:37:03.429 [2024-07-26 20:57:51.763090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.429 [2024-07-26 20:57:51.763130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.429 [2024-07-26 20:57:51.763146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.429 [2024-07-26 20:57:51.763156] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.429 [2024-07-26 20:57:51.763164] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.429 [2024-07-26 20:57:51.773535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.429 qpair failed and we were unable to recover it. 00:37:03.429 [2024-07-26 20:57:51.783112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.429 [2024-07-26 20:57:51.783149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.429 [2024-07-26 20:57:51.783169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.429 [2024-07-26 20:57:51.783178] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.429 [2024-07-26 20:57:51.783187] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.429 [2024-07-26 20:57:51.793606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.429 qpair failed and we were unable to recover it. 
00:37:03.429 [2024-07-26 20:57:51.803169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.429 [2024-07-26 20:57:51.803209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.429 [2024-07-26 20:57:51.803225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.429 [2024-07-26 20:57:51.803234] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.430 [2024-07-26 20:57:51.803243] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.430 [2024-07-26 20:57:51.813727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.430 qpair failed and we were unable to recover it. 00:37:03.430 [2024-07-26 20:57:51.823165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.430 [2024-07-26 20:57:51.823199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.430 [2024-07-26 20:57:51.823215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.430 [2024-07-26 20:57:51.823225] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.430 [2024-07-26 20:57:51.823234] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.430 [2024-07-26 20:57:51.833722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.430 qpair failed and we were unable to recover it. 00:37:03.430 [2024-07-26 20:57:51.843253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.430 [2024-07-26 20:57:51.843290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.430 [2024-07-26 20:57:51.843307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.430 [2024-07-26 20:57:51.843316] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.430 [2024-07-26 20:57:51.843325] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.430 [2024-07-26 20:57:51.853606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.430 qpair failed and we were unable to recover it. 
00:37:03.430 [2024-07-26 20:57:51.863386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.430 [2024-07-26 20:57:51.863426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.430 [2024-07-26 20:57:51.863441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.430 [2024-07-26 20:57:51.863451] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.430 [2024-07-26 20:57:51.863462] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.430 [2024-07-26 20:57:51.873837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.430 qpair failed and we were unable to recover it. 00:37:03.430 [2024-07-26 20:57:51.883497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.430 [2024-07-26 20:57:51.883539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.430 [2024-07-26 20:57:51.883555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.430 [2024-07-26 20:57:51.883564] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.430 [2024-07-26 20:57:51.883573] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.430 [2024-07-26 20:57:51.894005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.430 qpair failed and we were unable to recover it. 00:37:03.430 [2024-07-26 20:57:51.903449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.430 [2024-07-26 20:57:51.903481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.430 [2024-07-26 20:57:51.903497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.430 [2024-07-26 20:57:51.903507] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.430 [2024-07-26 20:57:51.903515] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.430 [2024-07-26 20:57:51.914083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.430 qpair failed and we were unable to recover it. 
00:37:03.430 [2024-07-26 20:57:51.923523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.430 [2024-07-26 20:57:51.923558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.430 [2024-07-26 20:57:51.923574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.430 [2024-07-26 20:57:51.923584] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.430 [2024-07-26 20:57:51.923592] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.430 [2024-07-26 20:57:51.933927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.430 qpair failed and we were unable to recover it. 00:37:03.430 [2024-07-26 20:57:51.943653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.430 [2024-07-26 20:57:51.943694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.430 [2024-07-26 20:57:51.943710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.430 [2024-07-26 20:57:51.943719] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.430 [2024-07-26 20:57:51.943728] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.430 [2024-07-26 20:57:51.954141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.430 qpair failed and we were unable to recover it. 00:37:03.430 [2024-07-26 20:57:51.963588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.430 [2024-07-26 20:57:51.963637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.430 [2024-07-26 20:57:51.963653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.430 [2024-07-26 20:57:51.963663] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.430 [2024-07-26 20:57:51.963672] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.430 [2024-07-26 20:57:51.974312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.430 qpair failed and we were unable to recover it. 
00:37:03.690 [2024-07-26 20:57:51.983735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.690 [2024-07-26 20:57:51.983778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.690 [2024-07-26 20:57:51.983794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.690 [2024-07-26 20:57:51.983804] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.690 [2024-07-26 20:57:51.983813] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.690 [2024-07-26 20:57:51.994233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.690 qpair failed and we were unable to recover it. 00:37:03.690 [2024-07-26 20:57:52.003777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.691 [2024-07-26 20:57:52.003814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.691 [2024-07-26 20:57:52.003830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.691 [2024-07-26 20:57:52.003840] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.691 [2024-07-26 20:57:52.003849] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.691 [2024-07-26 20:57:52.014232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.691 qpair failed and we were unable to recover it. 00:37:03.691 [2024-07-26 20:57:52.023752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.691 [2024-07-26 20:57:52.023792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.691 [2024-07-26 20:57:52.023809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.691 [2024-07-26 20:57:52.023818] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.691 [2024-07-26 20:57:52.023828] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.691 [2024-07-26 20:57:52.034231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.691 qpair failed and we were unable to recover it. 
00:37:03.691 [2024-07-26 20:57:52.043868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.691 [2024-07-26 20:57:52.043905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.691 [2024-07-26 20:57:52.043923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.691 [2024-07-26 20:57:52.043936] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.691 [2024-07-26 20:57:52.043945] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.691 [2024-07-26 20:57:52.054420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.691 qpair failed and we were unable to recover it. 00:37:03.691 [2024-07-26 20:57:52.064016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.691 [2024-07-26 20:57:52.064054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.691 [2024-07-26 20:57:52.064072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.691 [2024-07-26 20:57:52.064082] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.691 [2024-07-26 20:57:52.064090] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.691 [2024-07-26 20:57:52.074509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.691 qpair failed and we were unable to recover it. 00:37:03.691 [2024-07-26 20:57:52.083995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.691 [2024-07-26 20:57:52.084030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.691 [2024-07-26 20:57:52.084049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.691 [2024-07-26 20:57:52.084059] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.691 [2024-07-26 20:57:52.084069] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.691 [2024-07-26 20:57:52.094645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.691 qpair failed and we were unable to recover it. 
00:37:03.691 [2024-07-26 20:57:52.103993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.691 [2024-07-26 20:57:52.104030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.691 [2024-07-26 20:57:52.104048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.691 [2024-07-26 20:57:52.104058] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.691 [2024-07-26 20:57:52.104067] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.691 [2024-07-26 20:57:52.114290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.691 qpair failed and we were unable to recover it. 00:37:03.691 [2024-07-26 20:57:52.123987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.691 [2024-07-26 20:57:52.124031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.691 [2024-07-26 20:57:52.124048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.691 [2024-07-26 20:57:52.124058] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.691 [2024-07-26 20:57:52.124067] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.691 [2024-07-26 20:57:52.134549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.691 qpair failed and we were unable to recover it. 00:37:03.691 [2024-07-26 20:57:52.144146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.691 [2024-07-26 20:57:52.144182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.691 [2024-07-26 20:57:52.144199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.691 [2024-07-26 20:57:52.144209] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.691 [2024-07-26 20:57:52.144218] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.691 [2024-07-26 20:57:52.154452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.691 qpair failed and we were unable to recover it. 
00:37:03.691 [2024-07-26 20:57:52.164084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.691 [2024-07-26 20:57:52.164119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.691 [2024-07-26 20:57:52.164135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.691 [2024-07-26 20:57:52.164145] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.691 [2024-07-26 20:57:52.164154] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.691 [2024-07-26 20:57:52.174423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.691 qpair failed and we were unable to recover it. 00:37:03.691 [2024-07-26 20:57:52.184109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.691 [2024-07-26 20:57:52.184147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.691 [2024-07-26 20:57:52.184164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.691 [2024-07-26 20:57:52.184174] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.691 [2024-07-26 20:57:52.184182] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.691 [2024-07-26 20:57:52.194383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.691 qpair failed and we were unable to recover it. 00:37:03.691 [2024-07-26 20:57:52.204157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.692 [2024-07-26 20:57:52.204195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.692 [2024-07-26 20:57:52.204211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.692 [2024-07-26 20:57:52.204221] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.692 [2024-07-26 20:57:52.204230] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.692 [2024-07-26 20:57:52.214445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.692 qpair failed and we were unable to recover it. 
00:37:03.692 [2024-07-26 20:57:52.224118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.692 [2024-07-26 20:57:52.224162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.692 [2024-07-26 20:57:52.224181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.692 [2024-07-26 20:57:52.224191] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.692 [2024-07-26 20:57:52.224200] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.692 [2024-07-26 20:57:52.234722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.692 qpair failed and we were unable to recover it. 00:37:03.951 [2024-07-26 20:57:52.244330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.951 [2024-07-26 20:57:52.244371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.951 [2024-07-26 20:57:52.244388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.951 [2024-07-26 20:57:52.244397] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.952 [2024-07-26 20:57:52.244407] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.952 [2024-07-26 20:57:52.254529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.952 qpair failed and we were unable to recover it. 00:37:03.952 [2024-07-26 20:57:52.264334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.952 [2024-07-26 20:57:52.264372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.952 [2024-07-26 20:57:52.264389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.952 [2024-07-26 20:57:52.264398] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.952 [2024-07-26 20:57:52.264407] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.952 [2024-07-26 20:57:52.274609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.952 qpair failed and we were unable to recover it. 
00:37:03.952 [2024-07-26 20:57:52.284400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.952 [2024-07-26 20:57:52.284436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.952 [2024-07-26 20:57:52.284453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.952 [2024-07-26 20:57:52.284462] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.952 [2024-07-26 20:57:52.284472] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.952 [2024-07-26 20:57:52.294755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.952 qpair failed and we were unable to recover it. 00:37:03.952 [2024-07-26 20:57:52.304402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.952 [2024-07-26 20:57:52.304437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.952 [2024-07-26 20:57:52.304454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.952 [2024-07-26 20:57:52.304463] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.952 [2024-07-26 20:57:52.304477] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.952 [2024-07-26 20:57:52.314659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.952 qpair failed and we were unable to recover it. 00:37:03.952 [2024-07-26 20:57:52.324507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.952 [2024-07-26 20:57:52.324546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.952 [2024-07-26 20:57:52.324562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.952 [2024-07-26 20:57:52.324572] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.952 [2024-07-26 20:57:52.324581] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.952 [2024-07-26 20:57:52.334799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.952 qpair failed and we were unable to recover it. 
00:37:03.952 [2024-07-26 20:57:52.344473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.952 [2024-07-26 20:57:52.344511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.952 [2024-07-26 20:57:52.344527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.952 [2024-07-26 20:57:52.344537] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.952 [2024-07-26 20:57:52.344545] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.952 [2024-07-26 20:57:52.354870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.952 qpair failed and we were unable to recover it. 00:37:03.952 [2024-07-26 20:57:52.364549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.952 [2024-07-26 20:57:52.364586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.952 [2024-07-26 20:57:52.364602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.952 [2024-07-26 20:57:52.364612] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.952 [2024-07-26 20:57:52.364621] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.952 [2024-07-26 20:57:52.374908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.952 qpair failed and we were unable to recover it. 00:37:03.952 [2024-07-26 20:57:52.384677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.952 [2024-07-26 20:57:52.384713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.952 [2024-07-26 20:57:52.384730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.952 [2024-07-26 20:57:52.384739] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.952 [2024-07-26 20:57:52.384748] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.952 [2024-07-26 20:57:52.395030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.952 qpair failed and we were unable to recover it. 
00:37:03.952 [2024-07-26 20:57:52.404713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.952 [2024-07-26 20:57:52.404751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.952 [2024-07-26 20:57:52.404768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.952 [2024-07-26 20:57:52.404777] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.952 [2024-07-26 20:57:52.404786] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.952 [2024-07-26 20:57:52.415210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.952 qpair failed and we were unable to recover it. 00:37:03.952 [2024-07-26 20:57:52.424819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.952 [2024-07-26 20:57:52.424859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.952 [2024-07-26 20:57:52.424876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.952 [2024-07-26 20:57:52.424886] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.952 [2024-07-26 20:57:52.424897] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.952 [2024-07-26 20:57:52.435254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.952 qpair failed and we were unable to recover it. 00:37:03.952 [2024-07-26 20:57:52.445014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.952 [2024-07-26 20:57:52.445053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.952 [2024-07-26 20:57:52.445069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.953 [2024-07-26 20:57:52.445079] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.953 [2024-07-26 20:57:52.445088] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.953 [2024-07-26 20:57:52.455384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.953 qpair failed and we were unable to recover it. 
00:37:03.953 [2024-07-26 20:57:52.464978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.953 [2024-07-26 20:57:52.465013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.953 [2024-07-26 20:57:52.465029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.953 [2024-07-26 20:57:52.465039] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.953 [2024-07-26 20:57:52.465048] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.953 [2024-07-26 20:57:52.475409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.953 qpair failed and we were unable to recover it. 00:37:03.953 [2024-07-26 20:57:52.485028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.953 [2024-07-26 20:57:52.485062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.953 [2024-07-26 20:57:52.485079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.953 [2024-07-26 20:57:52.485092] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.953 [2024-07-26 20:57:52.485102] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:03.953 [2024-07-26 20:57:52.495476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:03.953 qpair failed and we were unable to recover it. 00:37:04.212 [2024-07-26 20:57:52.504958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.212 [2024-07-26 20:57:52.504999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.212 [2024-07-26 20:57:52.505018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.212 [2024-07-26 20:57:52.505027] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.212 [2024-07-26 20:57:52.505036] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.212 [2024-07-26 20:57:52.515408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.212 qpair failed and we were unable to recover it. 
00:37:04.212 [2024-07-26 20:57:52.525180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.212 [2024-07-26 20:57:52.525222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.212 [2024-07-26 20:57:52.525239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.212 [2024-07-26 20:57:52.525249] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.212 [2024-07-26 20:57:52.525257] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.212 [2024-07-26 20:57:52.535448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.212 qpair failed and we were unable to recover it. 00:37:04.212 [2024-07-26 20:57:52.545266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.212 [2024-07-26 20:57:52.545309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.213 [2024-07-26 20:57:52.545326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.213 [2024-07-26 20:57:52.545335] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.213 [2024-07-26 20:57:52.545344] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.213 [2024-07-26 20:57:52.555442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.213 qpair failed and we were unable to recover it. 00:37:04.213 [2024-07-26 20:57:52.565352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.213 [2024-07-26 20:57:52.565390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.213 [2024-07-26 20:57:52.565407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.213 [2024-07-26 20:57:52.565417] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.213 [2024-07-26 20:57:52.565426] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.213 [2024-07-26 20:57:52.575531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.213 qpair failed and we were unable to recover it. 
00:37:04.213 [2024-07-26 20:57:52.585251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.213 [2024-07-26 20:57:52.585293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.213 [2024-07-26 20:57:52.585309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.213 [2024-07-26 20:57:52.585319] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.213 [2024-07-26 20:57:52.585327] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.213 [2024-07-26 20:57:52.595608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.213 qpair failed and we were unable to recover it. 00:37:04.213 [2024-07-26 20:57:52.605394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.213 [2024-07-26 20:57:52.605432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.213 [2024-07-26 20:57:52.605448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.213 [2024-07-26 20:57:52.605457] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.213 [2024-07-26 20:57:52.605466] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.213 [2024-07-26 20:57:52.615695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.213 qpair failed and we were unable to recover it. 00:37:04.213 [2024-07-26 20:57:52.625383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.213 [2024-07-26 20:57:52.625424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.213 [2024-07-26 20:57:52.625440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.213 [2024-07-26 20:57:52.625450] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.213 [2024-07-26 20:57:52.625458] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.213 [2024-07-26 20:57:52.635795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.213 qpair failed and we were unable to recover it. 
00:37:04.213 [2024-07-26 20:57:52.645544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.213 [2024-07-26 20:57:52.645580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.213 [2024-07-26 20:57:52.645596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.213 [2024-07-26 20:57:52.645605] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.213 [2024-07-26 20:57:52.645614] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.213 [2024-07-26 20:57:52.655839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.213 qpair failed and we were unable to recover it. 00:37:04.213 [2024-07-26 20:57:52.665534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.213 [2024-07-26 20:57:52.665574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.213 [2024-07-26 20:57:52.665594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.213 [2024-07-26 20:57:52.665603] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.213 [2024-07-26 20:57:52.665612] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.213 [2024-07-26 20:57:52.675879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.213 qpair failed and we were unable to recover it. 00:37:04.213 [2024-07-26 20:57:52.685639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.213 [2024-07-26 20:57:52.685682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.213 [2024-07-26 20:57:52.685698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.213 [2024-07-26 20:57:52.685707] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.213 [2024-07-26 20:57:52.685716] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.213 [2024-07-26 20:57:52.695995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.213 qpair failed and we were unable to recover it. 
00:37:04.213 [2024-07-26 20:57:52.705579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.213 [2024-07-26 20:57:52.705617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.213 [2024-07-26 20:57:52.705644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.213 [2024-07-26 20:57:52.705654] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.213 [2024-07-26 20:57:52.705663] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.213 [2024-07-26 20:57:52.715960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.213 qpair failed and we were unable to recover it. 00:37:04.213 [2024-07-26 20:57:52.725601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.213 [2024-07-26 20:57:52.725643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.214 [2024-07-26 20:57:52.725661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.214 [2024-07-26 20:57:52.725671] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.214 [2024-07-26 20:57:52.725681] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.214 [2024-07-26 20:57:52.736057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.214 qpair failed and we were unable to recover it. 00:37:04.214 [2024-07-26 20:57:52.745809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.214 [2024-07-26 20:57:52.745847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.214 [2024-07-26 20:57:52.745863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.214 [2024-07-26 20:57:52.745873] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.214 [2024-07-26 20:57:52.745885] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.214 [2024-07-26 20:57:52.756072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.214 qpair failed and we were unable to recover it. 
00:37:04.474 [2024-07-26 20:57:52.765873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.474 [2024-07-26 20:57:52.765912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.474 [2024-07-26 20:57:52.765930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.474 [2024-07-26 20:57:52.765940] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.474 [2024-07-26 20:57:52.765949] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.474 [2024-07-26 20:57:52.776377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.474 qpair failed and we were unable to recover it. 00:37:04.474 [2024-07-26 20:57:52.786019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.474 [2024-07-26 20:57:52.786059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.474 [2024-07-26 20:57:52.786077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.474 [2024-07-26 20:57:52.786087] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.474 [2024-07-26 20:57:52.786096] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.474 [2024-07-26 20:57:52.796459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.474 qpair failed and we were unable to recover it. 00:37:04.474 [2024-07-26 20:57:52.806012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:04.474 [2024-07-26 20:57:52.806055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:04.474 [2024-07-26 20:57:52.806071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:04.474 [2024-07-26 20:57:52.806080] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:04.474 [2024-07-26 20:57:52.806089] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:04.474 [2024-07-26 20:57:52.816422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:04.474 qpair failed and we were unable to recover it. 
00:37:04.474 [2024-07-26 20:57:52.825971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.474 [2024-07-26 20:57:52.826009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.474 [2024-07-26 20:57:52.826027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.474 [2024-07-26 20:57:52.826037] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.474 [2024-07-26 20:57:52.826045] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.474 [2024-07-26 20:57:52.836213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.474 qpair failed and we were unable to recover it.
00:37:04.474 [2024-07-26 20:57:52.846237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.474 [2024-07-26 20:57:52.846278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.474 [2024-07-26 20:57:52.846294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.474 [2024-07-26 20:57:52.846304] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.474 [2024-07-26 20:57:52.846313] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.474 [2024-07-26 20:57:52.856539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.474 qpair failed and we were unable to recover it.
00:37:04.474 [2024-07-26 20:57:52.866173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.474 [2024-07-26 20:57:52.866205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.474 [2024-07-26 20:57:52.866222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.474 [2024-07-26 20:57:52.866231] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.474 [2024-07-26 20:57:52.866240] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.474 [2024-07-26 20:57:52.876431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.474 qpair failed and we were unable to recover it.
00:37:04.474 [2024-07-26 20:57:52.886187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.474 [2024-07-26 20:57:52.886220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.474 [2024-07-26 20:57:52.886237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.475 [2024-07-26 20:57:52.886246] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.475 [2024-07-26 20:57:52.886255] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.475 [2024-07-26 20:57:52.896502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.475 qpair failed and we were unable to recover it.
00:37:04.475 [2024-07-26 20:57:52.906329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.475 [2024-07-26 20:57:52.906368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.475 [2024-07-26 20:57:52.906384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.475 [2024-07-26 20:57:52.906393] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.475 [2024-07-26 20:57:52.906402] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.475 [2024-07-26 20:57:52.916649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.475 qpair failed and we were unable to recover it.
00:37:04.475 [2024-07-26 20:57:52.926402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.475 [2024-07-26 20:57:52.926448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.475 [2024-07-26 20:57:52.926465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.475 [2024-07-26 20:57:52.926478] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.475 [2024-07-26 20:57:52.926487] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.475 [2024-07-26 20:57:52.936788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.475 qpair failed and we were unable to recover it.
00:37:04.475 [2024-07-26 20:57:52.946366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.475 [2024-07-26 20:57:52.946402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.475 [2024-07-26 20:57:52.946418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.475 [2024-07-26 20:57:52.946428] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.475 [2024-07-26 20:57:52.946437] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.475 [2024-07-26 20:57:52.956777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.475 qpair failed and we were unable to recover it.
00:37:04.475 [2024-07-26 20:57:52.966433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.475 [2024-07-26 20:57:52.966469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.475 [2024-07-26 20:57:52.966485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.475 [2024-07-26 20:57:52.966494] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.475 [2024-07-26 20:57:52.966503] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.475 [2024-07-26 20:57:52.976856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.475 qpair failed and we were unable to recover it.
00:37:04.475 [2024-07-26 20:57:52.986533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.475 [2024-07-26 20:57:52.986571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.475 [2024-07-26 20:57:52.986587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.475 [2024-07-26 20:57:52.986597] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.475 [2024-07-26 20:57:52.986606] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.475 [2024-07-26 20:57:52.996630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.475 qpair failed and we were unable to recover it.
00:37:04.475 [2024-07-26 20:57:53.006527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.475 [2024-07-26 20:57:53.006572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.475 [2024-07-26 20:57:53.006588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.475 [2024-07-26 20:57:53.006598] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.475 [2024-07-26 20:57:53.006606] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.475 [2024-07-26 20:57:53.016886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.475 qpair failed and we were unable to recover it.
00:37:04.735 [2024-07-26 20:57:53.026536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.735 [2024-07-26 20:57:53.026575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.735 [2024-07-26 20:57:53.026592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.735 [2024-07-26 20:57:53.026601] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.735 [2024-07-26 20:57:53.026610] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.735 [2024-07-26 20:57:53.037016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.735 qpair failed and we were unable to recover it.
00:37:04.735 [2024-07-26 20:57:53.046622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.735 [2024-07-26 20:57:53.046661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.735 [2024-07-26 20:57:53.046676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.735 [2024-07-26 20:57:53.046686] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.735 [2024-07-26 20:57:53.046695] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.735 [2024-07-26 20:57:53.056961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.735 qpair failed and we were unable to recover it.
00:37:04.735 [2024-07-26 20:57:53.066773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.735 [2024-07-26 20:57:53.066809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.735 [2024-07-26 20:57:53.066827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.735 [2024-07-26 20:57:53.066837] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.735 [2024-07-26 20:57:53.066846] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.735 [2024-07-26 20:57:53.077102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.735 qpair failed and we were unable to recover it.
00:37:04.735 [2024-07-26 20:57:53.086762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.735 [2024-07-26 20:57:53.086804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.735 [2024-07-26 20:57:53.086820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.735 [2024-07-26 20:57:53.086830] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.735 [2024-07-26 20:57:53.086838] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.735 [2024-07-26 20:57:53.097154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.735 qpair failed and we were unable to recover it.
00:37:04.735 [2024-07-26 20:57:53.106802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.735 [2024-07-26 20:57:53.106835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.735 [2024-07-26 20:57:53.106854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.735 [2024-07-26 20:57:53.106864] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.735 [2024-07-26 20:57:53.106873] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.735 [2024-07-26 20:57:53.117266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.735 qpair failed and we were unable to recover it.
00:37:04.735 [2024-07-26 20:57:53.126895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.735 [2024-07-26 20:57:53.126933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.735 [2024-07-26 20:57:53.126949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.735 [2024-07-26 20:57:53.126959] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.736 [2024-07-26 20:57:53.126967] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.736 [2024-07-26 20:57:53.137289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.736 qpair failed and we were unable to recover it.
00:37:04.736 [2024-07-26 20:57:53.147022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.736 [2024-07-26 20:57:53.147060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.736 [2024-07-26 20:57:53.147076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.736 [2024-07-26 20:57:53.147085] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.736 [2024-07-26 20:57:53.147094] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.736 [2024-07-26 20:57:53.157264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.736 qpair failed and we were unable to recover it.
00:37:04.736 [2024-07-26 20:57:53.166991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.736 [2024-07-26 20:57:53.167029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.736 [2024-07-26 20:57:53.167045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.736 [2024-07-26 20:57:53.167054] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.736 [2024-07-26 20:57:53.167063] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.736 [2024-07-26 20:57:53.177366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.736 qpair failed and we were unable to recover it.
00:37:04.736 [2024-07-26 20:57:53.187067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.736 [2024-07-26 20:57:53.187106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.736 [2024-07-26 20:57:53.187122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.736 [2024-07-26 20:57:53.187131] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.736 [2024-07-26 20:57:53.187144] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.736 [2024-07-26 20:57:53.197427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.736 qpair failed and we were unable to recover it.
00:37:04.736 [2024-07-26 20:57:53.207151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.736 [2024-07-26 20:57:53.207191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.736 [2024-07-26 20:57:53.207207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.736 [2024-07-26 20:57:53.207217] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.736 [2024-07-26 20:57:53.207226] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.736 [2024-07-26 20:57:53.217552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.736 qpair failed and we were unable to recover it.
00:37:04.736 [2024-07-26 20:57:53.227101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.736 [2024-07-26 20:57:53.227140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.736 [2024-07-26 20:57:53.227156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.736 [2024-07-26 20:57:53.227166] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.736 [2024-07-26 20:57:53.227175] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.736 [2024-07-26 20:57:53.237458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.736 qpair failed and we were unable to recover it.
00:37:04.736 [2024-07-26 20:57:53.247235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.736 [2024-07-26 20:57:53.247274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.736 [2024-07-26 20:57:53.247289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.736 [2024-07-26 20:57:53.247300] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.736 [2024-07-26 20:57:53.247309] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.736 [2024-07-26 20:57:53.257706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.736 qpair failed and we were unable to recover it.
00:37:04.736 [2024-07-26 20:57:53.267162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.736 [2024-07-26 20:57:53.267201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.736 [2024-07-26 20:57:53.267223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.736 [2024-07-26 20:57:53.267233] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.736 [2024-07-26 20:57:53.267243] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.736 [2024-07-26 20:57:53.277651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.736 qpair failed and we were unable to recover it.
00:37:04.995 [2024-07-26 20:57:53.287322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.995 [2024-07-26 20:57:53.287358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.995 [2024-07-26 20:57:53.287375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.995 [2024-07-26 20:57:53.287385] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.995 [2024-07-26 20:57:53.287394] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.995 [2024-07-26 20:57:53.297760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.995 qpair failed and we were unable to recover it.
00:37:04.995 [2024-07-26 20:57:53.307326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.995 [2024-07-26 20:57:53.307365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.995 [2024-07-26 20:57:53.307381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.995 [2024-07-26 20:57:53.307391] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.995 [2024-07-26 20:57:53.307400] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.995 [2024-07-26 20:57:53.317715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.995 qpair failed and we were unable to recover it.
00:37:04.995 [2024-07-26 20:57:53.327471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.995 [2024-07-26 20:57:53.327509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.995 [2024-07-26 20:57:53.327526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.995 [2024-07-26 20:57:53.327536] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.995 [2024-07-26 20:57:53.327545] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.995 [2024-07-26 20:57:53.338041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.995 qpair failed and we were unable to recover it.
00:37:04.995 [2024-07-26 20:57:53.347462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.995 [2024-07-26 20:57:53.347499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.995 [2024-07-26 20:57:53.347515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.995 [2024-07-26 20:57:53.347525] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.995 [2024-07-26 20:57:53.347534] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.995 [2024-07-26 20:57:53.357887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.995 qpair failed and we were unable to recover it.
00:37:04.995 [2024-07-26 20:57:53.367579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.995 [2024-07-26 20:57:53.367618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.995 [2024-07-26 20:57:53.367639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.995 [2024-07-26 20:57:53.367652] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.995 [2024-07-26 20:57:53.367661] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.995 [2024-07-26 20:57:53.378168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.995 qpair failed and we were unable to recover it.
00:37:04.995 [2024-07-26 20:57:53.387614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.995 [2024-07-26 20:57:53.387657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.995 [2024-07-26 20:57:53.387673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.995 [2024-07-26 20:57:53.387683] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.995 [2024-07-26 20:57:53.387692] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.995 [2024-07-26 20:57:53.398072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.995 qpair failed and we were unable to recover it.
00:37:04.995 [2024-07-26 20:57:53.407702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.995 [2024-07-26 20:57:53.407747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.995 [2024-07-26 20:57:53.407763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.995 [2024-07-26 20:57:53.407772] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.995 [2024-07-26 20:57:53.407781] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.995 [2024-07-26 20:57:53.418155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.995 qpair failed and we were unable to recover it.
00:37:04.995 [2024-07-26 20:57:53.427764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.996 [2024-07-26 20:57:53.427805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.996 [2024-07-26 20:57:53.427822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.996 [2024-07-26 20:57:53.427832] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.996 [2024-07-26 20:57:53.427840] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.996 [2024-07-26 20:57:53.438100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.996 qpair failed and we were unable to recover it.
00:37:04.996 [2024-07-26 20:57:53.447871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.996 [2024-07-26 20:57:53.447913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.996 [2024-07-26 20:57:53.447929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.996 [2024-07-26 20:57:53.447939] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.996 [2024-07-26 20:57:53.447947] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.996 [2024-07-26 20:57:53.458234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.996 qpair failed and we were unable to recover it.
00:37:04.996 [2024-07-26 20:57:53.467834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.996 [2024-07-26 20:57:53.467871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.996 [2024-07-26 20:57:53.467887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.996 [2024-07-26 20:57:53.467896] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.996 [2024-07-26 20:57:53.467905] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.996 [2024-07-26 20:57:53.478164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.996 qpair failed and we were unable to recover it.
00:37:04.996 [2024-07-26 20:57:53.487987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.996 [2024-07-26 20:57:53.488035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.996 [2024-07-26 20:57:53.488051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.996 [2024-07-26 20:57:53.488060] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.996 [2024-07-26 20:57:53.488070] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.996 [2024-07-26 20:57:53.498408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.996 qpair failed and we were unable to recover it.
00:37:04.996 [2024-07-26 20:57:53.507882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.996 [2024-07-26 20:57:53.507918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.996 [2024-07-26 20:57:53.507934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.996 [2024-07-26 20:57:53.507943] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.996 [2024-07-26 20:57:53.507952] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.996 [2024-07-26 20:57:53.518597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.996 qpair failed and we were unable to recover it.
00:37:04.996 [2024-07-26 20:57:53.528068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.996 [2024-07-26 20:57:53.528101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.996 [2024-07-26 20:57:53.528118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.996 [2024-07-26 20:57:53.528128] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.996 [2024-07-26 20:57:53.528136] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:04.996 [2024-07-26 20:57:53.538636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:04.996 qpair failed and we were unable to recover it.
00:37:05.255 [2024-07-26 20:57:53.548153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.255 [2024-07-26 20:57:53.548192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.255 [2024-07-26 20:57:53.548212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.255 [2024-07-26 20:57:53.548222] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.255 [2024-07-26 20:57:53.548230] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.255 [2024-07-26 20:57:53.558474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.255 qpair failed and we were unable to recover it.
00:37:05.255 [2024-07-26 20:57:53.568214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.255 [2024-07-26 20:57:53.568257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.255 [2024-07-26 20:57:53.568273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.255 [2024-07-26 20:57:53.568283] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.255 [2024-07-26 20:57:53.568292] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.255 [2024-07-26 20:57:53.578659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.255 qpair failed and we were unable to recover it.
00:37:05.255 [2024-07-26 20:57:53.588168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.255 [2024-07-26 20:57:53.588209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.255 [2024-07-26 20:57:53.588225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.255 [2024-07-26 20:57:53.588234] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.255 [2024-07-26 20:57:53.588243] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.255 [2024-07-26 20:57:53.598407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.255 qpair failed and we were unable to recover it.
00:37:05.255 [2024-07-26 20:57:53.608304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.255 [2024-07-26 20:57:53.608343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.255 [2024-07-26 20:57:53.608360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.255 [2024-07-26 20:57:53.608369] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.255 [2024-07-26 20:57:53.608378] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.255 [2024-07-26 20:57:53.618811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.255 qpair failed and we were unable to recover it.
00:37:05.256 [2024-07-26 20:57:53.628343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.256 [2024-07-26 20:57:53.628380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.256 [2024-07-26 20:57:53.628396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.256 [2024-07-26 20:57:53.628406] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.256 [2024-07-26 20:57:53.628419] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.256 [2024-07-26 20:57:53.638753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.256 qpair failed and we were unable to recover it.
00:37:05.256 [2024-07-26 20:57:53.648431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.256 [2024-07-26 20:57:53.648467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.256 [2024-07-26 20:57:53.648483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.256 [2024-07-26 20:57:53.648493] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.256 [2024-07-26 20:57:53.648502] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.256 [2024-07-26 20:57:53.658834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.256 qpair failed and we were unable to recover it.
00:37:05.256 [2024-07-26 20:57:53.668393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.256 [2024-07-26 20:57:53.668436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.256 [2024-07-26 20:57:53.668452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.256 [2024-07-26 20:57:53.668461] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.256 [2024-07-26 20:57:53.668470] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.256 [2024-07-26 20:57:53.678897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.256 qpair failed and we were unable to recover it.
00:37:05.256 [2024-07-26 20:57:53.688505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.256 [2024-07-26 20:57:53.688546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.256 [2024-07-26 20:57:53.688562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.256 [2024-07-26 20:57:53.688572] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.256 [2024-07-26 20:57:53.688581] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.256 [2024-07-26 20:57:53.699012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.256 qpair failed and we were unable to recover it.
00:37:05.256 [2024-07-26 20:57:53.708495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.256 [2024-07-26 20:57:53.708534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.256 [2024-07-26 20:57:53.708549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.256 [2024-07-26 20:57:53.708559] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.256 [2024-07-26 20:57:53.708568] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.256 [2024-07-26 20:57:53.719012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.256 qpair failed and we were unable to recover it.
00:37:05.256 [2024-07-26 20:57:53.728710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.256 [2024-07-26 20:57:53.728754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.256 [2024-07-26 20:57:53.728770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.256 [2024-07-26 20:57:53.728780] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.256 [2024-07-26 20:57:53.728788] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.256 [2024-07-26 20:57:53.738951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.256 qpair failed and we were unable to recover it.
00:37:05.256 [2024-07-26 20:57:53.748707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.256 [2024-07-26 20:57:53.748747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.256 [2024-07-26 20:57:53.748764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.256 [2024-07-26 20:57:53.748774] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.256 [2024-07-26 20:57:53.748783] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.256 [2024-07-26 20:57:53.759031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.256 qpair failed and we were unable to recover it.
00:37:05.256 [2024-07-26 20:57:53.768764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.256 [2024-07-26 20:57:53.768805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.256 [2024-07-26 20:57:53.768821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.256 [2024-07-26 20:57:53.768830] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.256 [2024-07-26 20:57:53.768839] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.256 [2024-07-26 20:57:53.779092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.256 qpair failed and we were unable to recover it.
00:37:05.256 [2024-07-26 20:57:53.788771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.256 [2024-07-26 20:57:53.788809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.256 [2024-07-26 20:57:53.788825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.256 [2024-07-26 20:57:53.788835] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.256 [2024-07-26 20:57:53.788844] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.256 [2024-07-26 20:57:53.799186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.256 qpair failed and we were unable to recover it.
00:37:05.515 [2024-07-26 20:57:53.808840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.515 [2024-07-26 20:57:53.808883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.515 [2024-07-26 20:57:53.808902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.515 [2024-07-26 20:57:53.808915] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.515 [2024-07-26 20:57:53.808924] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.515 [2024-07-26 20:57:53.819213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.515 qpair failed and we were unable to recover it.
00:37:05.515 [2024-07-26 20:57:53.828946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.515 [2024-07-26 20:57:53.828984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.515 [2024-07-26 20:57:53.829001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.515 [2024-07-26 20:57:53.829011] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.515 [2024-07-26 20:57:53.829020] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.515 [2024-07-26 20:57:53.839314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.515 qpair failed and we were unable to recover it.
00:37:05.515 [2024-07-26 20:57:53.849016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.515 [2024-07-26 20:57:53.849051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.515 [2024-07-26 20:57:53.849068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.515 [2024-07-26 20:57:53.849077] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.515 [2024-07-26 20:57:53.849085] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.515 [2024-07-26 20:57:53.859385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.515 qpair failed and we were unable to recover it.
00:37:05.515 [2024-07-26 20:57:53.869120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.515 [2024-07-26 20:57:53.869158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.515 [2024-07-26 20:57:53.869174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.515 [2024-07-26 20:57:53.869183] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.515 [2024-07-26 20:57:53.869192] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.515 [2024-07-26 20:57:53.879367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.515 qpair failed and we were unable to recover it.
00:37:05.515 [2024-07-26 20:57:53.889081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.515 [2024-07-26 20:57:53.889121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.515 [2024-07-26 20:57:53.889137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.515 [2024-07-26 20:57:53.889147] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.515 [2024-07-26 20:57:53.889156] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.515 [2024-07-26 20:57:53.899471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.515 qpair failed and we were unable to recover it.
00:37:05.515 [2024-07-26 20:57:53.909123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.515 [2024-07-26 20:57:53.909166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.515 [2024-07-26 20:57:53.909182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.515 [2024-07-26 20:57:53.909192] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.515 [2024-07-26 20:57:53.909201] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.515 [2024-07-26 20:57:53.919449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.515 qpair failed and we were unable to recover it.
00:37:05.515 [2024-07-26 20:57:53.929205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.515 [2024-07-26 20:57:53.929248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.515 [2024-07-26 20:57:53.929264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.515 [2024-07-26 20:57:53.929273] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.515 [2024-07-26 20:57:53.929283] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.515 [2024-07-26 20:57:53.939774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.515 qpair failed and we were unable to recover it.
00:37:05.515 [2024-07-26 20:57:53.949344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.515 [2024-07-26 20:57:53.949385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.515 [2024-07-26 20:57:53.949401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.516 [2024-07-26 20:57:53.949411] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.516 [2024-07-26 20:57:53.949420] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:37:05.516 [2024-07-26 20:57:53.959602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:05.516 qpair failed and we were unable to recover it.
00:37:05.516 [2024-07-26 20:57:53.969400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.516 [2024-07-26 20:57:53.969438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.516 [2024-07-26 20:57:53.969453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.516 [2024-07-26 20:57:53.969463] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.516 [2024-07-26 20:57:53.969472] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.516 [2024-07-26 20:57:53.979743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.516 qpair failed and we were unable to recover it. 00:37:05.516 [2024-07-26 20:57:53.989336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.516 [2024-07-26 20:57:53.989376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.516 [2024-07-26 20:57:53.989396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.516 [2024-07-26 20:57:53.989405] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.516 [2024-07-26 20:57:53.989415] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.516 [2024-07-26 20:57:53.999715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.516 qpair failed and we were unable to recover it. 00:37:05.516 [2024-07-26 20:57:54.009514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.516 [2024-07-26 20:57:54.009557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.516 [2024-07-26 20:57:54.009574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.516 [2024-07-26 20:57:54.009584] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.516 [2024-07-26 20:57:54.009593] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.516 [2024-07-26 20:57:54.019906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.516 qpair failed and we were unable to recover it. 
00:37:05.516 [2024-07-26 20:57:54.029477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.516 [2024-07-26 20:57:54.029513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.516 [2024-07-26 20:57:54.029530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.516 [2024-07-26 20:57:54.029540] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.516 [2024-07-26 20:57:54.029549] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.516 [2024-07-26 20:57:54.039971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.516 qpair failed and we were unable to recover it. 00:37:05.516 [2024-07-26 20:57:54.049636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.516 [2024-07-26 20:57:54.049674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.516 [2024-07-26 20:57:54.049691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.516 [2024-07-26 20:57:54.049701] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.516 [2024-07-26 20:57:54.049709] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.516 [2024-07-26 20:57:54.060130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.516 qpair failed and we were unable to recover it. 00:37:05.775 [2024-07-26 20:57:54.069586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.775 [2024-07-26 20:57:54.069636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.775 [2024-07-26 20:57:54.069654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.775 [2024-07-26 20:57:54.069664] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.775 [2024-07-26 20:57:54.069676] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.775 [2024-07-26 20:57:54.080122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.775 qpair failed and we were unable to recover it. 
00:37:05.775 [2024-07-26 20:57:54.089684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.775 [2024-07-26 20:57:54.089726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.775 [2024-07-26 20:57:54.089742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.775 [2024-07-26 20:57:54.089752] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.775 [2024-07-26 20:57:54.089760] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.775 [2024-07-26 20:57:54.100124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.775 qpair failed and we were unable to recover it. 00:37:05.775 [2024-07-26 20:57:54.109824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.775 [2024-07-26 20:57:54.109869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.775 [2024-07-26 20:57:54.109886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.775 [2024-07-26 20:57:54.109895] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.775 [2024-07-26 20:57:54.109904] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.775 [2024-07-26 20:57:54.120238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.775 qpair failed and we were unable to recover it. 00:37:05.775 [2024-07-26 20:57:54.129839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.775 [2024-07-26 20:57:54.129883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.775 [2024-07-26 20:57:54.129899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.775 [2024-07-26 20:57:54.129909] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.775 [2024-07-26 20:57:54.129918] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.775 [2024-07-26 20:57:54.140317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.775 qpair failed and we were unable to recover it. 
00:37:05.775 [2024-07-26 20:57:54.149919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.775 [2024-07-26 20:57:54.149956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.775 [2024-07-26 20:57:54.149972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.775 [2024-07-26 20:57:54.149981] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.775 [2024-07-26 20:57:54.149990] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.775 [2024-07-26 20:57:54.160144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.775 qpair failed and we were unable to recover it. 00:37:05.775 [2024-07-26 20:57:54.169977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.775 [2024-07-26 20:57:54.170015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.775 [2024-07-26 20:57:54.170031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.775 [2024-07-26 20:57:54.170040] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.775 [2024-07-26 20:57:54.170049] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.775 [2024-07-26 20:57:54.180351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.775 qpair failed and we were unable to recover it. 00:37:05.775 [2024-07-26 20:57:54.190008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.775 [2024-07-26 20:57:54.190047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.775 [2024-07-26 20:57:54.190063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.775 [2024-07-26 20:57:54.190073] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.775 [2024-07-26 20:57:54.190082] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.775 [2024-07-26 20:57:54.200551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.775 qpair failed and we were unable to recover it. 
00:37:05.775 [2024-07-26 20:57:54.210055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.775 [2024-07-26 20:57:54.210093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.776 [2024-07-26 20:57:54.210109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.776 [2024-07-26 20:57:54.210118] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.776 [2024-07-26 20:57:54.210127] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.776 [2024-07-26 20:57:54.220658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.776 qpair failed and we were unable to recover it. 00:37:05.776 [2024-07-26 20:57:54.230214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.776 [2024-07-26 20:57:54.230250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.776 [2024-07-26 20:57:54.230266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.776 [2024-07-26 20:57:54.230276] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.776 [2024-07-26 20:57:54.230285] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.776 [2024-07-26 20:57:54.240450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.776 qpair failed and we were unable to recover it. 00:37:05.776 [2024-07-26 20:57:54.250270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.776 [2024-07-26 20:57:54.250304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.776 [2024-07-26 20:57:54.250320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.776 [2024-07-26 20:57:54.250333] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.776 [2024-07-26 20:57:54.250342] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.776 [2024-07-26 20:57:54.260664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.776 qpair failed and we were unable to recover it. 
00:37:05.776 [2024-07-26 20:57:54.270311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.776 [2024-07-26 20:57:54.270349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.776 [2024-07-26 20:57:54.270365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.776 [2024-07-26 20:57:54.270375] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.776 [2024-07-26 20:57:54.270384] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.776 [2024-07-26 20:57:54.280652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.776 qpair failed and we were unable to recover it. 00:37:05.776 [2024-07-26 20:57:54.290435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.776 [2024-07-26 20:57:54.290480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.776 [2024-07-26 20:57:54.290496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.776 [2024-07-26 20:57:54.290505] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.776 [2024-07-26 20:57:54.290514] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.776 [2024-07-26 20:57:54.300582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.776 qpair failed and we were unable to recover it. 00:37:05.776 [2024-07-26 20:57:54.310421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.776 [2024-07-26 20:57:54.310461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.776 [2024-07-26 20:57:54.310477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.776 [2024-07-26 20:57:54.310487] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.776 [2024-07-26 20:57:54.310496] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:05.776 [2024-07-26 20:57:54.320783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:05.776 qpair failed and we were unable to recover it. 
00:37:06.035 [2024-07-26 20:57:54.330514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.035 [2024-07-26 20:57:54.330553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.035 [2024-07-26 20:57:54.330570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.035 [2024-07-26 20:57:54.330580] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.035 [2024-07-26 20:57:54.330588] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.035 [2024-07-26 20:57:54.340971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.035 qpair failed and we were unable to recover it. 00:37:06.035 [2024-07-26 20:57:54.350535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.035 [2024-07-26 20:57:54.350574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.035 [2024-07-26 20:57:54.350590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.035 [2024-07-26 20:57:54.350599] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.035 [2024-07-26 20:57:54.350608] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.035 [2024-07-26 20:57:54.360721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.035 qpair failed and we were unable to recover it. 00:37:06.035 [2024-07-26 20:57:54.370465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.035 [2024-07-26 20:57:54.370508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.035 [2024-07-26 20:57:54.370525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.035 [2024-07-26 20:57:54.370534] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.035 [2024-07-26 20:57:54.370543] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.035 [2024-07-26 20:57:54.380939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.035 qpair failed and we were unable to recover it. 
00:37:06.035 [2024-07-26 20:57:54.390703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.035 [2024-07-26 20:57:54.390745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.035 [2024-07-26 20:57:54.390761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.035 [2024-07-26 20:57:54.390771] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.035 [2024-07-26 20:57:54.390781] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.035 [2024-07-26 20:57:54.400882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.035 qpair failed and we were unable to recover it. 00:37:06.035 [2024-07-26 20:57:54.410596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.035 [2024-07-26 20:57:54.410640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.035 [2024-07-26 20:57:54.410656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.035 [2024-07-26 20:57:54.410666] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.035 [2024-07-26 20:57:54.410675] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.035 [2024-07-26 20:57:54.420970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.035 qpair failed and we were unable to recover it. 00:37:06.036 [2024-07-26 20:57:54.430839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.036 [2024-07-26 20:57:54.430882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.036 [2024-07-26 20:57:54.430903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.036 [2024-07-26 20:57:54.430912] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.036 [2024-07-26 20:57:54.430921] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.036 [2024-07-26 20:57:54.441156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.036 qpair failed and we were unable to recover it. 
00:37:06.036 [2024-07-26 20:57:54.450770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.036 [2024-07-26 20:57:54.450811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.036 [2024-07-26 20:57:54.450827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.036 [2024-07-26 20:57:54.450837] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.036 [2024-07-26 20:57:54.450846] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.036 [2024-07-26 20:57:54.461363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.036 qpair failed and we were unable to recover it. 00:37:06.036 [2024-07-26 20:57:54.470917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.036 [2024-07-26 20:57:54.470955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.036 [2024-07-26 20:57:54.470973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.036 [2024-07-26 20:57:54.470983] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.036 [2024-07-26 20:57:54.470992] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.036 [2024-07-26 20:57:54.481173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.036 qpair failed and we were unable to recover it. 00:37:06.036 [2024-07-26 20:57:54.490973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.036 [2024-07-26 20:57:54.491014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.036 [2024-07-26 20:57:54.491030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.036 [2024-07-26 20:57:54.491040] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.036 [2024-07-26 20:57:54.491050] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.036 [2024-07-26 20:57:54.501357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.036 qpair failed and we were unable to recover it. 
00:37:06.036 [2024-07-26 20:57:54.510926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.036 [2024-07-26 20:57:54.510967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.036 [2024-07-26 20:57:54.510984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.036 [2024-07-26 20:57:54.510993] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.036 [2024-07-26 20:57:54.511005] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.036 [2024-07-26 20:57:54.521265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.036 qpair failed and we were unable to recover it. 00:37:06.036 [2024-07-26 20:57:54.531116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.036 [2024-07-26 20:57:54.531152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.036 [2024-07-26 20:57:54.531169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.036 [2024-07-26 20:57:54.531178] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.036 [2024-07-26 20:57:54.531187] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.036 [2024-07-26 20:57:54.541571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.036 qpair failed and we were unable to recover it. 00:37:06.036 [2024-07-26 20:57:54.551038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.036 [2024-07-26 20:57:54.551073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.036 [2024-07-26 20:57:54.551089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.036 [2024-07-26 20:57:54.551099] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.036 [2024-07-26 20:57:54.551107] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.036 [2024-07-26 20:57:54.561492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.036 qpair failed and we were unable to recover it. 
00:37:06.036 [2024-07-26 20:57:54.571225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.036 [2024-07-26 20:57:54.571264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.036 [2024-07-26 20:57:54.571280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.036 [2024-07-26 20:57:54.571290] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.036 [2024-07-26 20:57:54.571299] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.036 [2024-07-26 20:57:54.581527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.036 qpair failed and we were unable to recover it. 00:37:06.294 [2024-07-26 20:57:54.591346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.294 [2024-07-26 20:57:54.591384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.294 [2024-07-26 20:57:54.591400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.294 [2024-07-26 20:57:54.591409] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.294 [2024-07-26 20:57:54.591418] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.294 [2024-07-26 20:57:54.601588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.294 qpair failed and we were unable to recover it. 00:37:06.294 [2024-07-26 20:57:54.611259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.294 [2024-07-26 20:57:54.611301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.294 [2024-07-26 20:57:54.611317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.294 [2024-07-26 20:57:54.611327] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.294 [2024-07-26 20:57:54.611336] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.294 [2024-07-26 20:57:54.621669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.294 qpair failed and we were unable to recover it. 
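[Editor's note] Every retry in the run above fails with the same signature: the target rejects the I/O-queue CONNECT ("Unknown controller ID 0x1"), the host sees the Fabrics CONNECT complete with sct 1 / sc 130, and the qpair is torn down with CQ transport error -6. To tally the repeats rather than read them one by one, something like the following can be run against a saved copy of this console output (the build.log filename is an assumption; the match strings are verbatim from the records above):

  # count how many qpairs failed without recovery
  grep -c 'qpair failed and we were unable to recover it' build.log
  # group the failures by the rqpair address that could not connect
  grep -o 'Failed to connect rqpair=0x[0-9a-f]*' build.log | sort | uniq -c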
00:37:06.294 [2024-07-26 20:57:54.631298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.294 [2024-07-26 20:57:54.631335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.294 [2024-07-26 20:57:54.631352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.294 [2024-07-26 20:57:54.631361] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.294 [2024-07-26 20:57:54.631370] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:06.294 [2024-07-26 20:57:54.641841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:06.294 qpair failed and we were unable to recover it. 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Write 
completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 Read completed with error (sct=0, sc=8) 00:37:07.231 starting I/O failed 00:37:07.231 [2024-07-26 20:57:55.646852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:07.231 [2024-07-26 20:57:55.653967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.231 [2024-07-26 20:57:55.654015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.231 [2024-07-26 20:57:55.654034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.231 [2024-07-26 20:57:55.654044] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.231 [2024-07-26 20:57:55.654053] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380 00:37:07.231 [2024-07-26 20:57:55.664756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:07.231 qpair failed and we were unable to recover it. 00:37:07.231 [2024-07-26 20:57:55.674323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.231 [2024-07-26 20:57:55.674363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.231 [2024-07-26 20:57:55.674380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.231 [2024-07-26 20:57:55.674390] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.231 [2024-07-26 20:57:55.674399] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380 00:37:07.231 [2024-07-26 20:57:55.684746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:07.231 qpair failed and we were unable to recover it. 00:37:07.231 [2024-07-26 20:57:55.684876] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:37:07.231 A controller has encountered a failure and is being reset. 
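[Editor's note] The keep-alive submission failure above is what finally trips the host into a full controller reset. To pull just the reset-related milestones out of the surrounding retry noise, a filter over the same saved log works (filename again an assumption; the patterns are substrings of the records in this section):

  grep -E 'Keep Alive failed|encountered a failure|properly reset' build.log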
00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Read completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 Write completed with error (sct=0, sc=8) 00:37:08.169 starting I/O failed 00:37:08.169 [2024-07-26 20:57:56.689897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.169 [2024-07-26 20:57:56.697146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.169 [2024-07-26 20:57:56.697194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.169 [2024-07-26 20:57:56.697222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.169 [2024-07-26 20:57:56.697238] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:37:08.169 [2024-07-26 20:57:56.697251] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:08.169 [2024-07-26 20:57:56.707848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:08.169 qpair failed and we were unable to recover it. 00:37:08.169 [2024-07-26 20:57:56.717491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.169 [2024-07-26 20:57:56.717528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.169 [2024-07-26 20:57:56.717544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.169 [2024-07-26 20:57:56.717554] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.169 [2024-07-26 20:57:56.717562] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:37:08.428 [2024-07-26 20:57:56.728090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:08.428 qpair failed and we were unable to recover it. 00:37:08.429 [2024-07-26 20:57:56.728249] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:37:08.429 [2024-07-26 20:57:56.760412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:37:08.429 Controller properly reset. 00:37:08.429 Initializing NVMe Controllers 00:37:08.429 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:08.429 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:08.429 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:08.429 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:08.429 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:08.429 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:08.429 Initialization complete. Launching workers. 
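[Editor's note] After the reset the host reattaches to the same subsystem at 192.168.100.8:4420 and resumes I/O across all four cores. This harness uses SPDK's userspace initiator for that; as a rough cross-check only, the equivalent rediscovery and attach from a stock Linux initiator would look like the following (an assumption — nvme-cli against the kernel RDMA transport, which is not what the test itself runs):

  nvme discover -t rdma -a 192.168.100.8 -s 4420
  nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1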
00:37:08.429 Starting thread on core 1 00:37:08.429 Starting thread on core 2 00:37:08.429 Starting thread on core 3 00:37:08.429 Starting thread on core 0 00:37:08.429 20:57:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:08.429 00:37:08.429 real 0m13.553s 00:37:08.429 user 0m28.859s 00:37:08.429 sys 0m3.584s 00:37:08.429 20:57:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:08.429 20:57:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:08.429 ************************************ 00:37:08.429 END TEST nvmf_target_disconnect_tc2 00:37:08.429 ************************************ 00:37:08.429 20:57:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:37:08.429 20:57:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:37:08.429 20:57:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:08.429 20:57:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:08.429 20:57:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:08.429 ************************************ 00:37:08.429 START TEST nvmf_target_disconnect_tc3 00:37:08.429 ************************************ 00:37:08.429 20:57:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc3 00:37:08.429 20:57:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=1359192 00:37:08.429 20:57:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:37:08.429 20:57:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:37:08.688 EAL: No free 2048 kB hugepages reported on node 1 00:37:10.593 20:57:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 1357817 00:37:10.593 20:57:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error 
(sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Write completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 Read completed with error (sct=0, sc=8) 00:37:11.994 starting I/O failed 00:37:11.994 [2024-07-26 20:58:00.114443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:12.575 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 1357817 Killed "${NVMF_APP[@]}" "$@" 00:37:12.575 20:58:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:37:12.575 20:58:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:12.575 20:58:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:12.575 20:58:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:12.575 20:58:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:12.575 20:58:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1359984 00:37:12.575 20:58:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1359984 00:37:12.575 20:58:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:12.575 20:58:00 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1359984 ']' 00:37:12.575 20:58:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:12.575 20:58:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:12.575 20:58:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:12.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:12.575 20:58:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:12.575 20:58:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:12.575 [2024-07-26 20:58:00.979771] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:37:12.575 [2024-07-26 20:58:00.979827] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:12.575 EAL: No free 2048 kB hugepages reported on node 1 00:37:12.575 [2024-07-26 20:58:01.082326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Read completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Read completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Read completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Read completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Read completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Read completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Read completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Read completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Read completed with 
error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Read completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Read completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Read completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 Write completed with error (sct=0, sc=8) 00:37:12.575 starting I/O failed 00:37:12.575 [2024-07-26 20:58:01.119515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:12.575 [2024-07-26 20:58:01.120792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:12.575 [2024-07-26 20:58:01.120828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:12.575 [2024-07-26 20:58:01.120842] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:12.575 [2024-07-26 20:58:01.120851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:12.575 [2024-07-26 20:58:01.120859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:12.575 [2024-07-26 20:58:01.120983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:37:12.575 [2024-07-26 20:58:01.121093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:37:12.575 [2024-07-26 20:58:01.121203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:37:12.575 [2024-07-26 20:58:01.121204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:37:13.512 20:58:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:13.512 20:58:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # return 0 00:37:13.512 20:58:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:13.512 20:58:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:13.512 20:58:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:13.512 20:58:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:13.512 20:58:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:13.512 20:58:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.512 20:58:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:13.512 Malloc0 00:37:13.512 20:58:01 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.512 20:58:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:37:13.512 20:58:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.512 20:58:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:13.512 [2024-07-26 20:58:01.877283] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1beb020/0x1bf7530) succeed. 00:37:13.512 [2024-07-26 20:58:01.887712] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bec660/0x1c775d0) succeed. 00:37:13.512 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.512 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:13.512 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.512 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:13.512 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.512 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:13.512 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.512 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:13.512 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.512 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:37:13.512 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.512 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:13.513 [2024-07-26 20:58:02.026803] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:37:13.513 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.513 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:37:13.513 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.513 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 
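The rpc_cmd calls traced above pass their arguments straight through to scripts/rpc.py, so the same tc3 target setup can be reproduced by hand. A minimal sketch, assuming an nvmf_tgt already listening on the default RPC socket /var/tmp/spdk.sock:

  # create the backing malloc bdev (64 MB, 512-byte blocks)
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # create the RDMA transport with 1024 shared buffers
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  # create the subsystem and attach the namespace
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # listen on the address the tc3 failover path will resort to
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420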
00:37:13.513 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.513 20:58:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 1359192 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Write completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 Read completed with error (sct=0, sc=8) 00:37:13.772 starting I/O failed 00:37:13.772 [2024-07-26 20:58:02.124713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:13.772 [2024-07-26 20:58:02.126268] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:13.772 [2024-07-26 20:58:02.126288] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 
00:37:13.772 [2024-07-26 20:58:02.126297] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:14.708 [2024-07-26 20:58:03.130075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:14.708 qpair failed and we were unable to recover it. 00:37:14.708 [2024-07-26 20:58:03.131528] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:14.709 [2024-07-26 20:58:03.131545] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:14.709 [2024-07-26 20:58:03.131554] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:15.643 [2024-07-26 20:58:04.135251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:15.643 qpair failed and we were unable to recover it. 00:37:15.643 [2024-07-26 20:58:04.136682] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:15.643 [2024-07-26 20:58:04.136699] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:15.643 [2024-07-26 20:58:04.136707] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:17.018 [2024-07-26 20:58:05.140619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:17.018 qpair failed and we were unable to recover it. 00:37:17.018 [2024-07-26 20:58:05.142045] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:17.018 [2024-07-26 20:58:05.142062] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:17.018 [2024-07-26 20:58:05.142071] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:17.951 [2024-07-26 20:58:06.145910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:17.951 qpair failed and we were unable to recover it. 00:37:17.951 [2024-07-26 20:58:06.147283] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:17.951 [2024-07-26 20:58:06.147301] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:17.951 [2024-07-26 20:58:06.147309] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:18.887 [2024-07-26 20:58:07.151030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.887 qpair failed and we were unable to recover it. 
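The repeated RDMA_CM_EVENT_REJECTED / "RDMA connect error -74" records above (and continuing below) are the host retrying the admin queue pair roughly once per second against an address where no listener remains. A hedged way to confirm the target side during this window is to query the subsystem's listeners over RPC (default socket path assumed):

  # list the subsystem's active listeners; only 192.168.100.9:4420 should remain
  scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1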
00:37:18.887 [2024-07-26 20:58:07.152560] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:18.887 [2024-07-26 20:58:07.152576] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:18.887 [2024-07-26 20:58:07.152584] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:19.824 [2024-07-26 20:58:08.156464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.824 qpair failed and we were unable to recover it. 00:37:19.824 [2024-07-26 20:58:08.157873] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:19.824 [2024-07-26 20:58:08.157891] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:19.824 [2024-07-26 20:58:08.157899] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:20.760 [2024-07-26 20:58:09.161753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.760 qpair failed and we were unable to recover it. 00:37:21.697 Write completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Write completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Write completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Write completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Read completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Read completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Write completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Read completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Write completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Write completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Read completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Write completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Read completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Read completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Write completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Write completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Write completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Write completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Read completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Write completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.697 Read completed with error (sct=0, sc=8) 00:37:21.697 starting I/O failed 00:37:21.698 Write completed with error (sct=0, sc=8) 00:37:21.698 starting I/O failed 00:37:21.698 Read completed with error (sct=0, sc=8) 00:37:21.698 starting I/O failed 00:37:21.698 Write completed with error (sct=0, sc=8) 00:37:21.698 starting I/O failed 00:37:21.698 Write completed with error (sct=0, sc=8) 00:37:21.698 
starting I/O failed 00:37:21.698 Write completed with error (sct=0, sc=8) 00:37:21.698 starting I/O failed 00:37:21.698 Write completed with error (sct=0, sc=8) 00:37:21.698 starting I/O failed 00:37:21.698 Read completed with error (sct=0, sc=8) 00:37:21.698 starting I/O failed 00:37:21.698 Write completed with error (sct=0, sc=8) 00:37:21.698 starting I/O failed 00:37:21.698 Read completed with error (sct=0, sc=8) 00:37:21.698 starting I/O failed 00:37:21.698 Read completed with error (sct=0, sc=8) 00:37:21.698 starting I/O failed 00:37:21.698 Write completed with error (sct=0, sc=8) 00:37:21.698 starting I/O failed 00:37:21.698 [2024-07-26 20:58:10.166781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:21.698 [2024-07-26 20:58:10.166799] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:37:21.698 A controller has encountered a failure and is being reset. 00:37:21.698 Resorting to new failover address 192.168.100.9 00:37:21.698 [2024-07-26 20:58:10.166879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:21.698 [2024-07-26 20:58:10.166929] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:37:21.698 [2024-07-26 20:58:10.196584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:37:21.698 Controller properly reset. 00:37:21.957 Initializing NVMe Controllers 00:37:21.957 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:21.957 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:21.957 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:21.957 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:21.957 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:21.957 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:21.957 Initialization complete. Launching workers. 
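Above, the failed keep-alive triggers a controller reset and the host resorts to the failover address 192.168.100.9; the worker threads restart on their cores just below. Outside the test, the failover listener could also be sanity-checked with the kernel initiator; a sketch using nvme-cli (the -i 15 io-queue count mirrors the NVME_CONNECT form the harness sets up later in this log):

  modprobe nvme-rdma
  nvme connect -i 15 -t rdma -a 192.168.100.9 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                      # the SPDK namespace should enumerate
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1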
00:37:21.957 Starting thread on core 1 00:37:21.957 Starting thread on core 2 00:37:21.957 Starting thread on core 3 00:37:21.957 Starting thread on core 0 00:37:21.957 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:37:21.957 00:37:21.957 real 0m13.380s 00:37:21.957 user 0m56.114s 00:37:21.957 sys 0m4.253s 00:37:21.957 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:21.957 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:21.957 ************************************ 00:37:21.957 END TEST nvmf_target_disconnect_tc3 00:37:21.957 ************************************ 00:37:21.957 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:21.957 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:21.957 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:21.957 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:37:21.957 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:37:21.957 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:37:21.957 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:37:21.957 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:21.957 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:37:21.957 rmmod nvme_rdma 00:37:21.957 rmmod nvme_fabrics 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1359984 ']' 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1359984 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1359984 ']' 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1359984 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1359984 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1359984' 00:37:21.958 killing process with pid 1359984 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1359984 00:37:21.958 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1359984 00:37:22.217 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:22.217 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:37:22.217 00:37:22.217 real 0m36.516s 00:37:22.217 user 2m5.874s 00:37:22.217 sys 0m14.493s 00:37:22.217 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:22.217 20:58:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:22.217 ************************************ 00:37:22.217 END TEST nvmf_target_disconnect 00:37:22.217 ************************************ 00:37:22.217 20:58:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:22.217 00:37:22.217 real 7m29.106s 00:37:22.217 user 20m32.633s 00:37:22.217 sys 1m56.149s 00:37:22.217 20:58:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:22.217 20:58:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.217 ************************************ 00:37:22.217 END TEST nvmf_host 00:37:22.217 ************************************ 00:37:22.476 00:37:22.476 real 29m22.011s 00:37:22.476 user 81m50.557s 00:37:22.476 sys 7m15.897s 00:37:22.476 20:58:10 nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:22.476 20:58:10 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:22.476 ************************************ 00:37:22.476 END TEST nvmf_rdma 00:37:22.476 ************************************ 00:37:22.476 20:58:10 -- spdk/autotest.sh@291 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:37:22.476 20:58:10 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:22.476 20:58:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:22.476 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:37:22.476 ************************************ 00:37:22.476 START TEST spdkcli_nvmf_rdma 00:37:22.476 ************************************ 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:37:22.476 * Looking for test storage... 
00:37:22.476 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.476 20:58:10 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.477 20:58:10 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.477 20:58:10 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.477 20:58:10 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:37:22.477 20:58:10 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.477 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:37:22.477 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:22.477 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:22.477 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.477 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.477 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:22.477 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:22.477 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:22.477 20:58:10 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1361696 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 1361696 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # '[' -z 1361696 ']' 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:22.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:22.477 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:22.736 [2024-07-26 20:58:11.059473] Starting SPDK v24.09-pre git sha1 cac68eec0 / DPDK 23.11.0 initialization... 00:37:22.736 [2024-07-26 20:58:11.059527] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1361696 ] 00:37:22.736 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.736 [2024-07-26 20:58:11.144294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:22.736 [2024-07-26 20:58:11.185558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:22.736 [2024-07-26 20:58:11.185561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # return 0 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:37:23.672 20:58:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:37:31.781 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:37:31.781 Found 0000:d9:00.1 
(0x15b3 - 0x1015) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:31.781 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:37:31.782 Found net devices under 0000:d9:00.0: mlx_0_0 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:37:31.782 Found net devices under 0000:d9:00.1: mlx_0_1 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:37:31.782 20:58:20 
spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:37:31.782 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:31.782 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:37:31.782 altname enp217s0f0np0 00:37:31.782 altname ens818f0np0 00:37:31.782 inet 192.168.100.8/24 scope global mlx_0_0 00:37:31.782 valid_lft forever preferred_lft forever 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:31.782 20:58:20 
spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:37:31.782 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:31.782 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:37:31.782 altname enp217s0f1np1 00:37:31.782 altname ens818f1np1 00:37:31.782 inet 192.168.100.9/24 scope global mlx_0_1 00:37:31.782 valid_lft forever preferred_lft forever 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:37:31.782 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:37:32.043 20:58:20 spdkcli_nvmf_rdma 
-- nvmf/common.sh@112 -- # interface=mlx_0_1 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:37:32.043 192.168.100.9' 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:37:32.043 192.168.100.9' 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:37:32.043 192.168.100.9' 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:32.043 20:58:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:32.043 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:32.043 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:32.043 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:32.043 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:32.043 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:32.043 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:32.043 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:32.043 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:37:32.044 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create 
rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:37:32.044 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:32.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:32.044 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:32.044 ' 00:37:34.625 [2024-07-26 20:58:22.847965] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ed5da0/0x1e58480) succeed. 00:37:34.625 [2024-07-26 20:58:22.857741] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ed72a0/0x1ed84c0) succeed. 
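spdkcli_job.py above feeds this command list to spdkcli non-interactively; the same paths also work at the interactive spdkcli prompt. A short sketch, reusing commands taken verbatim from the job above:

  scripts/spdkcli.py
  /bdevs/malloc create 32 512 Malloc1
  /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4
  ll /nvmf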
00:37:36.003 [2024-07-26 20:58:24.119891] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:37:37.905 [2024-07-26 20:58:26.351017] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:37:39.806 [2024-07-26 20:58:28.277447] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:37:41.711 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:41.711 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:41.711 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:41.711 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:41.711 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:41.711 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:41.711 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:41.711 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:37:41.711 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:37:41.711 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:37:41.711 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:41.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:41.711 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:41.711 20:58:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:41.711 20:58:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:41.711 20:58:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:41.711 20:58:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:41.711 20:58:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:41.711 20:58:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:41.711 20:58:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:37:41.711 20:58:29 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:41.971 20:58:30 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:41.971 20:58:30 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:41.971 20:58:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:41.971 20:58:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:41.971 20:58:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:41.971 20:58:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:41.971 20:58:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:41.971 20:58:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:41.971 20:58:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:41.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:41.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:41.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:41.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:37:41.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:37:41.971 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:41.971 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:41.971 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:41.971 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:41.971 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:41.971 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:41.971 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:41.971 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:41.971 ' 00:37:47.242 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:47.242 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:47.242 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:47.242 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:47.242 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:37:47.242 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:37:47.242 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:47.242 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:47.242 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:47.242 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:47.242 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:47.242 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:47.242 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:47.242 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:47.242 20:58:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:47.242 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:47.242 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:47.242 20:58:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 1361696 00:37:47.242 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # '[' -z 1361696 ']' 00:37:47.242 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # kill -0 1361696 00:37:47.242 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # uname 00:37:47.242 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1361696 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1361696' 00:37:47.243 killing process with pid 1361696 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@969 -- # kill 1361696 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@974 -- # wait 1361696 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:37:47.243 rmmod nvme_rdma 00:37:47.243 rmmod nvme_fabrics 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:37:47.243 00:37:47.243 real 0m24.866s 00:37:47.243 user 0m53.135s 00:37:47.243 sys 0m7.499s 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:47.243 20:58:35 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:47.243 ************************************ 00:37:47.243 END TEST spdkcli_nvmf_rdma 00:37:47.243 ************************************ 00:37:47.243 20:58:35 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:37:47.243 20:58:35 -- spdk/autotest.sh@318 -- # '[' 0 -eq 1 ']' 00:37:47.243 20:58:35 -- spdk/autotest.sh@322 -- # '[' 0 -eq 1 ']' 00:37:47.243 20:58:35 -- spdk/autotest.sh@327 -- # '[' 0 -eq 1 ']' 00:37:47.243 20:58:35 -- spdk/autotest.sh@336 -- # '[' 0 -eq 1 ']' 00:37:47.243 20:58:35 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:37:47.243 20:58:35 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:37:47.243 20:58:35 -- spdk/autotest.sh@349 -- # '[' 0 -eq 1 ']' 00:37:47.243 20:58:35 -- spdk/autotest.sh@353 -- # '[' 0 -eq 1 ']' 00:37:47.243 20:58:35 -- spdk/autotest.sh@358 -- # '[' 0 -eq 1 ']' 00:37:47.243 20:58:35 -- spdk/autotest.sh@362 -- # '[' 0 -eq 1 ']' 00:37:47.243 20:58:35 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:37:47.243 20:58:35 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:37:47.243 20:58:35 -- spdk/autotest.sh@377 -- # [[ 0 -eq 1 ]] 00:37:47.243 20:58:35 -- spdk/autotest.sh@382 -- # trap - SIGINT SIGTERM EXIT 00:37:47.243 20:58:35 -- spdk/autotest.sh@384 -- # timing_enter post_cleanup 00:37:47.243 20:58:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:47.243 20:58:35 -- common/autotest_common.sh@10 -- # set +x 00:37:47.243 20:58:35 -- spdk/autotest.sh@385 -- # autotest_cleanup 00:37:47.243 20:58:35 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:47.243 20:58:35 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:47.243 20:58:35 -- common/autotest_common.sh@10 -- # set +x 00:37:53.806 INFO: APP EXITING 00:37:53.806 INFO: killing all VMs 00:37:53.806 INFO: killing vhost app 00:37:53.806 WARN: no vhost pid file found 00:37:53.806 INFO: EXIT DONE 00:37:57.127 Waiting for block devices as requested 00:37:57.127 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:57.127 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:57.127 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:57.127 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:57.127 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:57.127 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:57.127 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:57.386 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:57.386 0000:80:04.7 
(8086 2021): vfio-pci -> ioatdma 00:37:57.386 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:57.645 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:57.645 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:57.645 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:57.903 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:57.903 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:57.903 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:58.163 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:38:02.356 Cleaning 00:38:02.356 Removing: /var/run/dpdk/spdk0/config 00:38:02.356 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:02.356 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:02.356 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:02.356 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:02.356 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:02.356 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:02.356 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:02.356 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:02.356 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:02.356 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:02.356 Removing: /var/run/dpdk/spdk1/config 00:38:02.356 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:02.356 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:02.356 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:02.356 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:02.356 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:02.356 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:02.356 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:02.356 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:02.356 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:02.356 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:02.356 Removing: /var/run/dpdk/spdk1/mp_socket 00:38:02.356 Removing: /var/run/dpdk/spdk2/config 00:38:02.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:02.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:02.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:02.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:02.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:02.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:02.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:02.356 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:02.356 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:02.356 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:02.356 Removing: /var/run/dpdk/spdk3/config 00:38:02.356 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:02.356 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:02.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:02.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:02.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:02.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:02.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:02.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:02.357 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:02.357 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:02.357 Removing: /var/run/dpdk/spdk4/config 00:38:02.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:02.357 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:02.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:02.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:02.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:02.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:02.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:02.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:02.357 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:02.357 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:02.357 Removing: /dev/shm/bdevperf_trace.pid1248714 00:38:02.357 Removing: /dev/shm/bdevperf_trace.pid965684 00:38:02.357 Removing: /dev/shm/bdev_svc_trace.1 00:38:02.357 Removing: /dev/shm/nvmf_trace.0 00:38:02.357 Removing: /dev/shm/spdk_tgt_trace.pid917114 00:38:02.616 Removing: /var/run/dpdk/spdk0 00:38:02.616 Removing: /var/run/dpdk/spdk1 00:38:02.616 Removing: /var/run/dpdk/spdk2 00:38:02.616 Removing: /var/run/dpdk/spdk3 00:38:02.616 Removing: /var/run/dpdk/spdk4 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1021032 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1025437 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1126319 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1132376 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1138840 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1149169 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1197418 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1202487 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1246674 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1247634 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1248714 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1253743 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1262750 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1263725 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1264531 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1265566 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1265862 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1271094 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1271107 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1276280 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1276825 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1277446 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1278009 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1278165 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1280402 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1282254 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1284103 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1285952 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1287805 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1289665 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1296521 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1297156 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1299956 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1300955 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1308671 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1311372 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1317501 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1328244 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1328258 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1350104 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1350370 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1356707 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1357261 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1359192 00:38:02.616 Removing: /var/run/dpdk/spdk_pid1361696 00:38:02.616 Removing: /var/run/dpdk/spdk_pid913850 00:38:02.616 Removing: /var/run/dpdk/spdk_pid915584 
00:38:02.616 Removing: /var/run/dpdk/spdk_pid917114 00:38:02.616 Removing: /var/run/dpdk/spdk_pid917684 00:38:02.616 Removing: /var/run/dpdk/spdk_pid918653 00:38:02.875 Removing: /var/run/dpdk/spdk_pid918929 00:38:02.875 Removing: /var/run/dpdk/spdk_pid919935 00:38:02.875 Removing: /var/run/dpdk/spdk_pid920064 00:38:02.875 Removing: /var/run/dpdk/spdk_pid920433 00:38:02.875 Removing: /var/run/dpdk/spdk_pid926018 00:38:02.875 Removing: /var/run/dpdk/spdk_pid927541 00:38:02.875 Removing: /var/run/dpdk/spdk_pid927851 00:38:02.875 Removing: /var/run/dpdk/spdk_pid928199 00:38:02.875 Removing: /var/run/dpdk/spdk_pid928561 00:38:02.875 Removing: /var/run/dpdk/spdk_pid928865 00:38:02.875 Removing: /var/run/dpdk/spdk_pid929060 00:38:02.875 Removing: /var/run/dpdk/spdk_pid929344 00:38:02.875 Removing: /var/run/dpdk/spdk_pid929650 00:38:02.875 Removing: /var/run/dpdk/spdk_pid930502 00:38:02.875 Removing: /var/run/dpdk/spdk_pid933399 00:38:02.875 Removing: /var/run/dpdk/spdk_pid933697 00:38:02.875 Removing: /var/run/dpdk/spdk_pid933994 00:38:02.875 Removing: /var/run/dpdk/spdk_pid934253 00:38:02.875 Removing: /var/run/dpdk/spdk_pid934821 00:38:02.875 Removing: /var/run/dpdk/spdk_pid934858 00:38:02.875 Removing: /var/run/dpdk/spdk_pid935406 00:38:02.875 Removing: /var/run/dpdk/spdk_pid935669 00:38:02.875 Removing: /var/run/dpdk/spdk_pid935970 00:38:02.875 Removing: /var/run/dpdk/spdk_pid936047 00:38:02.875 Removing: /var/run/dpdk/spdk_pid936277 00:38:02.875 Removing: /var/run/dpdk/spdk_pid936538 00:38:02.875 Removing: /var/run/dpdk/spdk_pid936919 00:38:02.875 Removing: /var/run/dpdk/spdk_pid937202 00:38:02.875 Removing: /var/run/dpdk/spdk_pid937522 00:38:02.875 Removing: /var/run/dpdk/spdk_pid942391 00:38:02.875 Removing: /var/run/dpdk/spdk_pid947223 00:38:02.875 Removing: /var/run/dpdk/spdk_pid958762 00:38:02.875 Removing: /var/run/dpdk/spdk_pid960061 00:38:02.875 Removing: /var/run/dpdk/spdk_pid965684 00:38:02.875 Removing: /var/run/dpdk/spdk_pid965971 00:38:02.875 Removing: /var/run/dpdk/spdk_pid970978 00:38:02.875 Removing: /var/run/dpdk/spdk_pid977617 00:38:02.875 Removing: /var/run/dpdk/spdk_pid980343 00:38:02.875 Removing: /var/run/dpdk/spdk_pid992030 00:38:02.875 Clean 00:38:03.134 20:58:51 -- common/autotest_common.sh@1451 -- # return 0 00:38:03.134 20:58:51 -- spdk/autotest.sh@386 -- # timing_exit post_cleanup 00:38:03.134 20:58:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:03.134 20:58:51 -- common/autotest_common.sh@10 -- # set +x 00:38:03.134 20:58:51 -- spdk/autotest.sh@388 -- # timing_exit autotest 00:38:03.134 20:58:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:03.134 20:58:51 -- common/autotest_common.sh@10 -- # set +x 00:38:03.134 20:58:51 -- spdk/autotest.sh@389 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:38:03.134 20:58:51 -- spdk/autotest.sh@391 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:38:03.134 20:58:51 -- spdk/autotest.sh@391 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:38:03.134 20:58:51 -- spdk/autotest.sh@393 -- # hash lcov 00:38:03.134 20:58:51 -- spdk/autotest.sh@393 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:38:03.134 20:58:51 -- spdk/autotest.sh@395 -- # hostname 00:38:03.134 20:58:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:38:03.393 geninfo: WARNING: invalid characters removed from testname! 00:38:25.329 20:59:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:38:25.330 20:59:13 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:38:26.708 20:59:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:38:28.614 20:59:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:38:29.992 20:59:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:38:31.901 20:59:19 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:38:33.280 20:59:21 -- spdk/autotest.sh@402 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:33.280 20:59:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:38:33.280 20:59:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:33.280 20:59:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:33.280 20:59:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:33.280 20:59:21 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.280 20:59:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.280 20:59:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.280 20:59:21 -- paths/export.sh@5 -- $ export PATH 00:38:33.280 20:59:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.280 20:59:21 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:38:33.280 20:59:21 -- common/autobuild_common.sh@447 -- $ date +%s 00:38:33.280 20:59:21 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1722020361.XXXXXX 00:38:33.280 20:59:21 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1722020361.S5ZCds 00:38:33.280 20:59:21 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:38:33.280 20:59:21 -- common/autobuild_common.sh@453 -- $ '[' -n v23.11 ']' 00:38:33.280 20:59:21 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:38:33.280 20:59:21 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:38:33.280 20:59:21 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:33.280 20:59:21 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:33.280 20:59:21 -- common/autobuild_common.sh@463 -- $ get_config_params 00:38:33.280 20:59:21 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:38:33.280 20:59:21 -- common/autotest_common.sh@10 -- $ set +x 00:38:33.280 20:59:21 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:38:33.280 20:59:21 -- common/autobuild_common.sh@465 -- $ 
start_monitor_resources 00:38:33.280 20:59:21 -- pm/common@17 -- $ local monitor 00:38:33.280 20:59:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:33.280 20:59:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:33.280 20:59:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:33.280 20:59:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:33.280 20:59:21 -- pm/common@25 -- $ sleep 1 00:38:33.280 20:59:21 -- pm/common@21 -- $ date +%s 00:38:33.280 20:59:21 -- pm/common@21 -- $ date +%s 00:38:33.280 20:59:21 -- pm/common@21 -- $ date +%s 00:38:33.280 20:59:21 -- pm/common@21 -- $ date +%s 00:38:33.280 20:59:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722020361 00:38:33.280 20:59:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722020361 00:38:33.280 20:59:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722020361 00:38:33.280 20:59:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1722020361 00:38:33.280 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722020361_collect-vmstat.pm.log 00:38:33.280 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722020361_collect-cpu-load.pm.log 00:38:33.280 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722020361_collect-cpu-temp.pm.log 00:38:33.280 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1722020361_collect-bmc-pm.bmc.pm.log 00:38:34.219 20:59:22 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:38:34.219 20:59:22 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:38:34.219 20:59:22 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:38:34.219 20:59:22 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:34.219 20:59:22 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:34.219 20:59:22 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:34.219 20:59:22 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:34.219 20:59:22 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:34.219 20:59:22 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:34.219 20:59:22 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:38:34.219 20:59:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:34.219 20:59:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:34.219 20:59:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:34.219 20:59:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:34.219 20:59:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:34.219 20:59:22 -- 
pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:34.219 20:59:22 -- pm/common@44 -- $ pid=1382945 00:38:34.219 20:59:22 -- pm/common@50 -- $ kill -TERM 1382945 00:38:34.219 20:59:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:34.219 20:59:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:34.219 20:59:22 -- pm/common@44 -- $ pid=1382946 00:38:34.219 20:59:22 -- pm/common@50 -- $ kill -TERM 1382946 00:38:34.219 20:59:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:34.219 20:59:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:34.219 20:59:22 -- pm/common@44 -- $ pid=1382948 00:38:34.219 20:59:22 -- pm/common@50 -- $ kill -TERM 1382948 00:38:34.219 20:59:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:34.219 20:59:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:34.219 20:59:22 -- pm/common@44 -- $ pid=1382971 00:38:34.219 20:59:22 -- pm/common@50 -- $ sudo -E kill -TERM 1382971 00:38:34.478 + [[ -n 783165 ]] 00:38:34.478 + sudo kill 783165 00:38:34.488 [Pipeline] } 00:38:34.508 [Pipeline] // stage 00:38:34.513 [Pipeline] } 00:38:34.530 [Pipeline] // timeout 00:38:34.536 [Pipeline] } 00:38:34.552 [Pipeline] // catchError 00:38:34.557 [Pipeline] } 00:38:34.574 [Pipeline] // wrap 00:38:34.580 [Pipeline] } 00:38:34.595 [Pipeline] // catchError 00:38:34.604 [Pipeline] stage 00:38:34.606 [Pipeline] { (Epilogue) 00:38:34.619 [Pipeline] catchError 00:38:34.621 [Pipeline] { 00:38:34.635 [Pipeline] echo 00:38:34.637 Cleanup processes 00:38:34.642 [Pipeline] sh 00:38:34.925 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:38:34.926 1383048 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:38:34.926 1383394 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:38:34.937 [Pipeline] sh 00:38:35.219 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:38:35.220 ++ grep -v 'sudo pgrep' 00:38:35.220 ++ awk '{print $1}' 00:38:35.220 + sudo kill -9 1383048 00:38:35.231 [Pipeline] sh 00:38:35.513 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:35.513 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:38:40.786 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:38:44.989 [Pipeline] sh 00:38:45.274 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:45.274 Artifacts sizes are good 00:38:45.314 [Pipeline] archiveArtifacts 00:38:45.321 Archiving artifacts 00:38:45.487 [Pipeline] sh 00:38:45.785 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:38:45.798 [Pipeline] cleanWs 00:38:45.807 [WS-CLEANUP] Deleting project workspace... 00:38:45.807 [WS-CLEANUP] Deferred wipeout is used... 00:38:45.813 [WS-CLEANUP] done 00:38:45.814 [Pipeline] } 00:38:45.834 [Pipeline] // catchError 00:38:45.846 [Pipeline] sh 00:38:46.127 + logger -p user.info -t JENKINS-CI 00:38:46.137 [Pipeline] } 00:38:46.155 [Pipeline] // stage 00:38:46.161 [Pipeline] } 00:38:46.179 [Pipeline] // node 00:38:46.186 [Pipeline] End of Pipeline 00:38:46.237 Finished: SUCCESS
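The spdkcli_nvmf_rdma test that finished above drives SPDK's interactive CLI (scripts/spdkcli.py) non-interactively: it creates malloc bdevs, an RDMA transport, NVMe-oF subsystems with namespaces, listeners and allowed hosts, diffs "ll /nvmf" output against spdkcli_nvmf.test.match, then tears everything down in reverse. The following is a minimal sketch of that flow, not the test itself; it assumes a running nvmf_tgt, a checkout-relative scripts/spdkcli.py, and that 192.168.100.8 is an RDMA-capable port on the host (the address and every command string are taken from the "Executing command" entries in this log; adjust for your environment).

#!/usr/bin/env bash
# Sketch of the spdkcli NVMe-oF/RDMA configuration exercised by this run.
# Assumptions: nvmf_tgt is already running; path and IP below are taken
# from this log and will differ on other systems.
set -e
SPDKCLI=./scripts/spdkcli.py   # assumed path; the CI job uses an absolute workspace path

# Storage to export: a 32 MiB malloc bdev with 512-byte blocks
$SPDKCLI "/bdevs/malloc create 32 512 Malloc1"

# RDMA transport, then a subsystem with one namespace, one listener, one allowed host
$SPDKCLI "nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
$SPDKCLI "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1"
$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4"
$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False"
$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2"

# Inspect the tree; the check_match step diffs this output against the .match file
$SPDKCLI ll /nvmf

# Tear down in reverse order, mirroring spdkcli_clear_nvmf_config
$SPDKCLI "/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1"
$SPDKCLI "/bdevs/malloc delete Malloc1"

The CI job batches all of these through test/spdkcli/spdkcli_job.py in a single process, which is why the log shows them as one "Executing command" stream with a shared timestamp rather than separate CLI invocations.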